Byte[] read and write

Feb 22, 2012 at 11:58 PM

Unless I am missing something, there is no support for reading and writing a byte[] unless I convert it to a stream.  It would be great to see byte[] support added: apart from saving streams, I sometimes need to save binary-serialised objects and data that is already in a byte[].


Feb 23, 2012 at 12:26 AM


You are right. For now, there are no direct byte[] methods to read/write data in FileDB. That's because the primary use case for FileDB is storing files as streams, without keeping all the data in memory. But byte[] methods would be useful, so I will add them. In the meantime, you can use a MemoryStream to do the same thing.
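The MemoryStream workaround is just a matter of wrapping the byte[] on the way in and draining the stream on the way out. A minimal sketch (the two helpers below are plain .NET, not FileDB API; you would pass the resulting streams to whatever stream-based store/read calls you already use):

```csharp
using System;
using System.IO;

class ByteArrayWorkaround
{
    // Wrap an existing byte[] in a read-only stream to hand to a
    // stream-based Store(...) call. No copy of the data is made.
    static Stream ToStream(byte[] data)
    {
        return new MemoryStream(data, writable: false);
    }

    // Drain a stream returned by a stream-based Read(...) call
    // back into a byte[].
    static byte[] ToBytes(Stream stream)
    {
        using (var ms = new MemoryStream())
        {
            stream.CopyTo(ms);
            return ms.ToArray();
        }
    }

    static void Main()
    {
        byte[] original = { 1, 2, 3, 4 };
        byte[] roundTripped = ToBytes(ToStream(original));
        Console.WriteLine(roundTripped.Length); // 4
    }
}
```

Note that `new MemoryStream(data)` does not copy the array, so the wrap is essentially free; only the read-back side pays for a copy via `ToArray()`.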

I'm working on a new project to persist POCO C# objects in collections (like RavenDB/MongoDB), in a single file (like FileDB). But it will be fully searchable, using LINQ. Later it will probably be the right tool for you.

Feb 23, 2012 at 12:48 AM

In my case, my objects are generally serialised using ProtoBuf, and these byte[]s are exactly what I will be returning to web requests.  I do not need to deserialise the objects or anything like that.  I also have a relational DB that I will use to search metadata and resolve the GUIDs for these byte[]s, but I don't want to store the byte[]s in that DB because of the maximum database size restrictions on cheap online hosts.  Most cheap hosts offer unlimited bandwidth and disk space but only a few hundred megabytes of MS SQL DB space.


Because of this, I plan to move most of my data into a disk-based DB with very low memory usage, and use the SQL DB to search metadata and find the item I want in the file-based DBs.


One more question: if I am storing over 100k items of 4 MB or less in your file-based DB, what is the optimum number of items per FileDB file for the best possible read speed?  I am expecting concurrent requests for this data, and it needs to be as fast as possible.

Feb 23, 2012 at 1:18 AM

If you have 100,000 x 4 MB items, you will have 400 GB in a single file. I have never tested FileDB with a single file that big. I have run two big tests:

- A single FileDB with 1 million photos, each photo small (~15 KB).

- A single FileDB of 50 GB: 140 videos of about 350 MB each.

In both cases, adding more items or searching for an item showed no significant difference compared to a small FileDB. Read/write/search (by GUID) is very fast because FileDB uses a b-tree over random GUIDs. As you may have noticed when storing a new item, you cannot supply your own Guid; the Guid is generated by FileDB to keep the b-tree balanced.
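The key property here is that lookups in a balanced tree scale with the logarithm of the item count, not the item count itself. Assuming each step down the tree roughly halves the candidates, the expected number of steps is easy to estimate:

```csharp
using System;

class LookupDepth
{
    static void Main()
    {
        // Steps for a lookup in a balanced binary tree: ~log2(n).
        // (A b-tree with higher fan-out needs even fewer steps.)
        Console.WriteLine(Math.Ceiling(Math.Log(100000, 2)));  // 17
        Console.WriteLine(Math.Ceiling(Math.Log(1000000, 2))); // 20
    }
}
```

So going from 100k to 1 million items costs only a handful of extra steps per lookup, which matches the behaviour described above.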

My tips: try to keep your files under 16 GB for OS/NTFS reasons. Don't worry about how many items you have in a FileDB; whether 100k or 1M, a lookup will take no more than 20 or 30 steps down the b-tree. For better concurrency, create several FileDB files split by some rule, for example one file for all users whose names start with "a", one for "b", and so on. Using 26 FileDB files is not too many. If you expect many more concurrent users, you can split into more files.
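The split-by-first-letter rule above can be sketched as a simple routing function. The "data-a.fdb" naming scheme here is just an example, not anything FileDB requires:

```csharp
using System;

class FileDbSharding
{
    // Pick a FileDB data file from the first letter of a key
    // (here a user name), as suggested above.
    static string ShardPath(string userName)
    {
        char first = char.ToLowerInvariant(userName[0]);
        // Anything outside a-z goes to a shared overflow file.
        return (first >= 'a' && first <= 'z')
            ? string.Format("data-{0}.fdb", first)
            : "data-other.fdb";
    }

    static void Main()
    {
        Console.WriteLine(ShardPath("Alice"));  // data-a.fdb
        Console.WriteLine(ShardPath("bob"));    // data-b.fdb
        Console.WriteLine(ShardPath("42user")); // data-other.fdb
    }
}
```

Because each shard is an independent file, readers of "data-a.fdb" never contend with writers of "data-b.fdb", which is where the concurrency benefit comes from.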

Feb 23, 2012 at 2:04 AM

In my case it would be a max of 4 MB per item, and generally a lot less than that.  I was more interested in the speed when dealing with a large number of records/files, which it seems FileDB handles very well.  In my case I will have many threads reading but only one writing, and writes are quite rare.

It seems like your system is perfect for my use case, and I can't wait to start playing with it and testing it out.  Great job on it, by the way.  Very useful project.