wp-php-toolkit/bytestream
Latest stable version: v0.7.5
Composer install command:
composer require wp-php-toolkit/bytestream
Package description
ByteStream component for WordPress.
README
| slug | bytestream |
|---|---|
| title | ByteStream |
| install | wp-php-toolkit/bytestream |
| see_also | |
Composable streaming primitives for reading, writing, transforming, hashing, and compressing byte data. Pull/peek/consume semantics let parsers backtrack without copying, and deflate, inflate, and checksum filters snap together like Lego.
Why this exists
PHP's native streams are powerful but inconsistent. fread on a socket may return short reads with no warning; stream_filter_append is awkward to compose; gzip helpers and file handles expose different APIs. The ByteStream component normalizes these behind one small interface — pull / peek / consume — so a parser, a hash function, and a deflate filter all see the same shape.
The split between pull (buffer up to N bytes) and consume (advance past N bytes) is the secret. Parsers can peek ahead to detect a record boundary and decide whether to consume, without copying or allocating.
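To make the peek/consume split concrete, here is a minimal sketch in plain PHP. `ToyBuffer` is a hypothetical stand-in written for this illustration, not part of the library's API: `peek()` inspects upcoming bytes without advancing the read position, so a parser can check for a record boundary and back out for free, while `consume()` commits and advances.

```php
<?php
// Illustrative sketch only — a toy in-memory buffer, not the library API.
class ToyBuffer {
	private string $data;
	private int $offset = 0;

	public function __construct( string $data ) {
		$this->data = $data;
	}

	// Look at up to $n upcoming bytes WITHOUT advancing the read position.
	public function peek( int $n ): string {
		return substr( $this->data, $this->offset, $n );
	}

	// Return up to $n bytes and advance past them.
	public function consume( int $n ): string {
		$bytes         = substr( $this->data, $this->offset, $n );
		$this->offset += strlen( $bytes );
		return $bytes;
	}
}

$buf = new ToyBuffer( "REC:alpha;REC:bravo;" );
while ( 'REC:' === $buf->peek( 4 ) ) {  // detect a record boundary...
	$buf->consume( 4 );                 // ...and only now commit to it
	$record = '';
	while ( ';' !== ( $c = $buf->peek( 1 ) ) && '' !== $c ) {
		$record .= $buf->consume( 1 );
	}
	$buf->consume( 1 );                 // skip the trailing ';'
	echo $record, "\n";                 // prints "alpha", then "bravo"
}
```

Because `peek()` never moves the offset, the loop condition can probe the next four bytes on every iteration at no cost; a real stream adds buffering from a backing source, but the backtracking contract is the same.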
Read a file in chunks
The canonical loop. pull(N) reads up to N bytes from the underlying source into an internal buffer and returns how many ended up there; consume(N) reads N bytes from that buffer and advances past them. The buffer never grows beyond the chunk size you ask for.
```php
<?php
require '/wordpress/wp-content/php-toolkit/vendor/autoload.php';

use WordPress\ByteStream\ReadStream\FileReadStream;

$path = tempnam( sys_get_temp_dir(), 'demo' );
file_put_contents( $path, str_repeat( "log line\n", 200 ) );

$reader = FileReadStream::from_path( $path );
$total  = 0;
while ( ! $reader->reached_end_of_data() ) {
	$n = $reader->pull( 256 );
	if ( 0 === $n ) {
		break;
	}
	$total += strlen( $reader->consume( $n ) );
}
$reader->close_reading();

echo "Read {$total} bytes in 256-byte chunks.\n";
```

Output:

```
Read 1800 bytes in 256-byte chunks.
```
MemoryPipe as write-then-read buffer
MemoryPipe is bidirectional: you append_bytes() as a writer and pull/consume as a reader. It's the easiest way to wire one component's output into another's input.
Gotcha: A producer must call close_writing() when done — otherwise the consumer eventually throws NotEnoughDataException instead of seeing EOF.
```php
<?php
require '/wordpress/wp-content/php-toolkit/vendor/autoload.php';

use WordPress\ByteStream\MemoryPipe;

$pipe = new MemoryPipe();
$pipe->append_bytes( "first chunk\n" );
$pipe->append_bytes( "second chunk\n" );
$pipe->append_bytes( "third chunk\n" );
$pipe->close_writing();

while ( ! $pipe->reached_end_of_data() ) {
	$n = $pipe->pull( 1024 );
	if ( 0 === $n ) {
		break;
	}
	echo "got: " . $pipe->consume( $n );
}
```

Output:

```
got: first chunk
second chunk
third chunk
```
Compress on the way in, decompress on the way out
Wrap a stream in DeflateReadStream to get compressed bytes out; wrap it in InflateReadStream to get decompressed bytes out. Both are full ByteReadStream implementations, so they nest into anything else that takes a stream.
```php
<?php
require '/wordpress/wp-content/php-toolkit/vendor/autoload.php';

use WordPress\ByteStream\MemoryPipe;
use WordPress\ByteStream\ReadStream\DeflateReadStream;
use WordPress\ByteStream\ReadStream\InflateReadStream;

$original = str_repeat( "the quick brown fox. ", 50 );

// Compress: wrap the source so reads yield deflated bytes.
$src = new MemoryPipe( $original );
$src->close_writing();
$deflated   = new DeflateReadStream( $src, ZLIB_ENCODING_DEFLATE );
$compressed = $deflated->consume_all();

// Decompress: wrap the compressed bytes so reads yield the original.
$src2     = new MemoryPipe( $compressed );
$src2->close_writing();
$inflated = new InflateReadStream( $src2, ZLIB_ENCODING_DEFLATE );
$round    = $inflated->consume_all();

printf( "original : %d bytes\n", strlen( $original ) );
printf( "deflated : %d bytes (%.1f%%)\n", strlen( $compressed ), 100 * strlen( $compressed ) / strlen( $original ) );
printf( "round-trip: %s\n", $round === $original ? 'OK' : 'BROKEN' );
```

Output:

```
original : 1050 bytes
deflated : 45 bytes (4.3%)
round-trip: OK
```
Line-by-line reads from a chunked source
Reading text by line means handling chunk boundaries that fall mid-line. Keep the trailing partial line and prepend it to the next pull. The rest of the loop pretends the data was always whole.
```php
<?php
require '/wordpress/wp-content/php-toolkit/vendor/autoload.php';

use WordPress\ByteStream\MemoryPipe;

$pipe = new MemoryPipe();
$pipe->append_bytes( "alpha\nbravo\ncharl" );
$pipe->append_bytes( "ie\ndelta\necho\n" );
$pipe->close_writing();

$tail  = '';
$count = 0;
while ( ! $pipe->reached_end_of_data() ) {
	$n = $pipe->pull( 8 );
	if ( 0 === $n ) {
		break;
	}
	// Prepend the partial line left over from the previous chunk.
	$buf   = $tail . $pipe->consume( $n );
	$lines = explode( "\n", $buf );
	$tail  = array_pop( $lines ); // keep the trailing partial line
	foreach ( $lines as $line ) {
		printf( "[%d] %s\n", ++$count, $line );
	}
}
if ( '' !== $tail ) {
	printf( "[%d] %s\n", ++$count, $tail );
}
```

Output:

```
[1] alpha
[2] bravo
[3] charlie
[4] delta
[5] echo
```
Limit a stream to a fixed window
LimitedByteReadStream exposes only the next N bytes of an underlying stream as if those were the entire stream. This is how the ZIP decoder hands you the body of one entry without letting you read into the next.
```php
<?php
require '/wordpress/wp-content/php-toolkit/vendor/autoload.php';

use WordPress\ByteStream\MemoryPipe;
use WordPress\ByteStream\ReadStream\LimitedByteReadStream;

$source = new MemoryPipe( "HEADER:42|BODY:hello there|FOOTER:done" );
$source->close_writing();

// Skip past the 10-byte header.
$source->pull( 10 );
$source->consume( 10 );

// Expose only the next 16 bytes as if they were the whole stream.
$body = new LimitedByteReadStream( $source, 16 );
echo "body sees: " . $body->consume_all() . "\n";
echo "remaining in source: " . $source->consume_all() . "\n";
```

Output:

```
body sees: BODY:hello there
remaining in source: |FOOTER:done
```
Statistics
- Total downloads: 46.61k
- Monthly downloads: 0
- Daily downloads: 0
- Favorites: 0
- Hits: 1
- Dependent packages: 7
- Suggesters: 0
Other information
- License: GPL-2.0-or-later
- Last updated: 2025-09-06