If you have 100,000 x 4 MB files, you will end up with 400 GB in a single file. I have never tested FileDB with a single file that big. I have run two big tests:
- A single FileDB with 1 million photos - small photos (~15 KB each).
- A single FileDB with 50 GB of data - 140 videos of ~350 MB each.
In both cases, adding more items or searching for an item showed no noticeable difference compared with a small FileDB. Read/write/search (by GUID) is very fast because it uses a B-tree indexed on a random GUID. As you can see when storing a new item, you can't set your own Guid; the Guid is generated inside FileDB to keep the B-tree balanced.
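To get a feel for why lookups stay fast as the database grows, here is a back-of-the-envelope check (not FileDB's actual node fan-out, which may be wider): a lookup in a balanced search tree over n keys touches on the order of log(n) nodes, so even a plain binary split keeps the count small.

```python
import math

# Rough lookup cost for a balanced search tree holding n keys.
# Assuming the worst case of binary branching (real B-trees branch
# wider, so they need even fewer steps):
for n in (100_000, 1_000_000):
    print(n, "items ->", math.ceil(math.log2(n)), "steps")
```

This prints about 17 steps for 100k items and 20 for 1 million, which matches the "20 or 30 loops" figure below.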
My tips: try to keep your files under 16 GB for OS/NTFS reasons. Don't worry about how many items you have in a FileDB; 100k or 1M items means no more than 20 or 30 loops through the B-tree. For better concurrency control, create multiple FileDBs partitioned by some rule. For example: one file for all users whose names start with "a", one for "b", and so on. Using 26 FileDBs is not too many files. If you expect many more concurrent users, you can split into more files.
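The first-letter partitioning rule can be sketched as a small routing function. This is only an illustration of the idea; the file-naming scheme and the catch-all bucket for non-letter names are my assumptions, not part of FileDB:

```python
import string

def shard_path(username: str) -> str:
    """Map a username to one of 26 FileDB files by its first letter.

    Hypothetical naming scheme: users_a.fdb ... users_z.fdb, plus a
    catch-all users_other.fdb for names not starting with a letter.
    """
    first = username[0].lower()
    if first not in string.ascii_lowercase:
        first = "other"  # digits, symbols, etc. go to one extra file
    return f"users_{first}.fdb"

print(shard_path("Alice"))  # users_a.fdb
print(shard_path("bob"))    # users_b.fdb
```

Since each shard is an independent file, writers for "a"-users never contend with writers for "b"-users; if one bucket gets hot, the same idea extends to hashing the name into more buckets.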