Kafka
→ Kafka streaming
- Documentation
- State stores (aka KTables) can be tuned
- how RocksDB works
→ About rocksdb
- used/forked as the storage engine of CockroachDB and YugabyteDB
- forked from LevelDB
- maintained by Meta
- written in C++, with bindings for Java, Rust, Go
- embeddable database
- key-value pairs, more exactly byte-array pairs
- get/put/merge/delete, plus an iterator for scans
- LSM-Tree
- memtable buffers the incoming data until a size limit is reached (64 MB by default)
- WAL (Write-Ahead Log) to avoid data loss on a crash
- SST (Static Sorted Table) files can be compressed (Snappy, zstd, gzip...)
- offset/index map (to allow binary search within a compressed file)
- optional Bloom filter: makes lookups for keys that don't exist faster
- space/read amplification: each flush adds a file to disk, and reads must check more files until compaction merges them
- compaction creates a new level
- compaction can cascade across levels
- k-way merge strategy (to merge multiple levels of files)
- reads traverse the whole hierarchy (memtable -> all level-0 files -> target level N -> bloom/index check, then read the block)
- merge operation (= read-modify-write): an atomic, thread-safe alternative to a client-side read+put+delete
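The write/read path above (memtable, WAL, flush to a sorted SST run, binary search via a key index) can be sketched as a toy Python model — all names are illustrative, and real RocksDB adds blocks, compression, Bloom filters, and compaction on top:

```python
import bisect

class TinyLSM:
    """Toy LSM tree: memtable + WAL + immutable sorted runs (stand-ins for SSTs)."""

    def __init__(self, memtable_limit=4):
        self.memtable = {}               # in-memory buffer for incoming writes
        self.wal = []                    # stand-in for the on-disk write-ahead log
        self.ssts = []                   # flushed sorted runs, newest last
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.wal.append((key, value))    # log first, so a crash loses nothing
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        # sort the memtable and write it out as an immutable run
        self.ssts.append(sorted(self.memtable.items()))
        self.memtable.clear()
        self.wal.clear()                 # entries are now durable in the run

    def get(self, key):
        # read path: memtable first, then runs from newest to oldest
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.ssts):
            keys = [k for k, _ in run]
            i = bisect.bisect_left(keys, key)   # binary search via the key index
            if i < len(keys) and keys[i] == key:
                return run[i][1]
        return None
```

For example, with `memtable_limit=2`, two puts trigger a flush, and a later put of the same key shadows the flushed value because the memtable is consulted first.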
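The compaction step above is essentially a k-way merge of sorted runs where the newest version of a key wins; a minimal sketch with Python's `heapq` (tombstone dropping and level targeting omitted):

```python
import heapq

def compact(runs):
    """K-way merge of sorted runs into one run; newer runs win on duplicate keys.

    `runs` is ordered oldest -> newest; each run is a sorted list of (key, value).
    """
    heap = []
    for age, run in enumerate(runs):         # higher age = newer run
        if run:
            # (key, -age, index, age): for equal keys, the newest pops first
            heapq.heappush(heap, (run[0][0], -age, 0, age))
    merged = []
    while heap:
        key, _, idx, age = heapq.heappop(heap)
        if not merged or merged[-1][0] != key:   # skip shadowed older versions
            merged.append((key, runs[age][idx][1]))
        if idx + 1 < len(runs[age]):
            heapq.heappush(heap, (runs[age][idx + 1][0], -age, idx + 1, age))
    return merged
```

Merging `[("a", 1), ("c", 3)]` (older) with `[("a", 9), ("b", 2)]` (newer) yields `[("a", 9), ("b", 2), ("c", 3)]`: one pass, one output run, old versions dropped.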
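The merge operation can be sketched as recording operands and folding them with a pluggable operator at read time, which is what makes it atomic without a client-side read — here with integer addition as the operator (illustrative Python, not the actual RocksDB API):

```python
class MergeStore:
    """Sketch of merge semantics: writes record operands, reads combine them."""

    def __init__(self, merge_op=lambda acc, operand: acc + operand):
        self.log = []            # append-only (key, kind, value) entries
        self.merge_op = merge_op

    def put(self, key, value):
        self.log.append((key, "put", value))

    def merge(self, key, operand):
        # no read needed: just record the operand (cheap, single atomic append)
        self.log.append((key, "merge", operand))

    def get(self, key):
        # replay this key's entries, folding merge operands into the value
        result = None
        for k, kind, v in self.log:
            if k != key:
                continue
            if kind == "put":
                result = v
            else:
                result = v if result is None else self.merge_op(result, v)
        return result
```

For a counter: `put("hits", 1)` then `merge("hits", 2)` and `merge("hits", 3)` reads back as 6, without any writer ever fetching the current value.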
→ Kafka connect
→ secrets
There is a mechanism to provide/implement secrets in Kafka Connect:
we could easily provide an EnvConfigProvider which looks up values in Linux environment variables.
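The placeholder-resolution idea behind such a provider can be sketched as follows (the function name and the `${env:NAME}` placeholder syntax are illustrative here, not the actual Kafka Connect API):

```python
import os
import re

def resolve_placeholders(config, environ=os.environ):
    """Replace ${env:NAME} placeholders with values from environment variables.

    Unknown variables are left untouched so the gap stays visible in the config.
    """
    pattern = re.compile(r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}")

    def substitute(value):
        return pattern.sub(lambda m: environ.get(m.group(1), m.group(0)), value)

    return {k: substitute(v) for k, v in config.items()}
```

This keeps the secret out of the config file itself: the stored config contains only `"connection.password": "${env:DB_PASSWORD}"`, and the value is injected from the environment at load time.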
→ Kafka auth