RedisKVStore request size #12442
If the underlying Redis server has restrictions on the request size, such as the Vercel KV store, is there any way to specify the maximum size of a single request? Looking at the implementation of `RedisKVStore`, I could not find an obvious way to do this.
Replies: 1 comment 1 reply
Hey @anirbanbasu, great to see you back! Hope you're doing well. 😄

Based on the information you've provided and the context from similar issues, it appears that the `RedisKVStore` component in LlamaIndex does not currently have a built-in feature for automatically handling large data sizes by splitting them into smaller chunks for transmission. The `batch_size` parameter in the `put_all` method is primarily designed to improve the efficiency of transmitting multiple operations in a single network call, rather than chunking individual large data items into smaller parts for storage and retrieval.

To handle large data sizes by splitting them into smaller chunks, you would need to implement a custom solution. This could involve creating a wrapper around the `RedisKVStore`. Here's a simplified example of how you might start implementing chunking for the `put` method:

```python
def put_with_chunking(self, key: str, val: dict, collection: str = DEFAULT_COLLECTION, chunk_size: int = 1024 * 1024):
    """Put a key-value pair into the store, with chunking for large values."""
    serialized_val = json.dumps(val)
    chunks = [serialized_val[i:i + chunk_size] for i in range(0, len(serialized_val), chunk_size)]
    for i, chunk in enumerate(chunks):
        chunk_key = f"{key}_chunk_{i}"
        self._redis_client.hset(name=collection, key=chunk_key, value=chunk)
    # Store metadata about the number of chunks
    self._redis_client.hset(name=collection, key=f"{key}_metadata", value=json.dumps({"chunks": len(chunks)}))
```

This example demonstrates the concept of chunking large values for storage. A similar approach would be needed on the `get` side to reassemble the chunks on retrieval. As for the maximum data size that the `RedisKVStore` can accept in one request, that depends on the limits of the underlying Redis server or hosted service (such as Vercel KV) rather than on LlamaIndex itself.

I hope this information helps. If you have further questions or need more clarification, feel free to ask.
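The reply above only sketches the write path. A complementary read path could look like the following sketch, under these assumptions: the `{key}_chunk_{i}` / `{key}_metadata` layout from the example above, and a redis-py-style client exposing `hset`/`hget`. The `FakeRedisHash` class is a hypothetical in-memory stand-in used only so the logic can be exercised without a Redis server; in practice you would pass a real `redis.Redis` client. The functions are written as free functions taking the client explicitly, rather than as methods on a store subclass.

```python
import json


class FakeRedisHash:
    """Hypothetical in-memory stand-in for the hash commands of a redis client.

    Only for illustration; a real redis.Redis instance exposes the same
    hset/hget calls used below.
    """

    def __init__(self):
        self._data = {}

    def hset(self, name, key, value):
        self._data.setdefault(name, {})[key] = value

    def hget(self, name, key):
        return self._data.get(name, {}).get(key)


def put_with_chunking(client, key, val, collection="default", chunk_size=1024 * 1024):
    """Store a JSON-serializable value as fixed-size chunks plus a metadata entry."""
    serialized = json.dumps(val)
    chunks = [serialized[i:i + chunk_size] for i in range(0, len(serialized), chunk_size)]
    for i, chunk in enumerate(chunks):
        client.hset(name=collection, key=f"{key}_chunk_{i}", value=chunk)
    # Record how many chunks the reader must fetch to reassemble the value.
    client.hset(name=collection, key=f"{key}_metadata", value=json.dumps({"chunks": len(chunks)}))


def get_with_chunking(client, key, collection="default"):
    """Reassemble a value stored by put_with_chunking; returns None if absent."""
    meta = client.hget(name=collection, key=f"{key}_metadata")
    if meta is None:
        return None
    n_chunks = json.loads(meta)["chunks"]
    parts = [client.hget(name=collection, key=f"{key}_chunk_{i}") for i in range(n_chunks)]
    return json.loads("".join(parts))
```

Note that each chunk write is still a separate `hset`, so every individual request stays under the server's size limit as long as `chunk_size` is chosen below it; a production version would also want to delete stale chunks when overwriting a key with a smaller value.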