Hi, I've been asked to help out as a DIT for an upcoming student short film, although I'll mostly just be offloading and transcoding files. This will be my second time doing it, so I've tried to investigate, and I'd love to speed things up a bit from last time. Here are the two main issues I'm running into:

- No dedicated DIT software: I'm offloading with DaVinci Resolve's Clone Tool, which keeps me from doing anything else during offloads. I'm trying to fix this by getting Silverstack installed on the machine.
- I back up everything to LaCie Rugged disks, which run quite slow, averaging 70-80 MB/s. I need to back up each card to 2-3 drives, and I tend to always make the backups from the card itself, which makes freeing cards take much longer.

I'm given a MacBook Pro (2017/2018) to work with, but I've recently gotten access to a Stardom DR5-WBS3, which I think may help. Will building something like a RAID 0 or RAID 10 on the Stardom, connected over USB 3.0, be faster than the LaCie Rugged (also connected over USB 3.0)? Is cascade copying a good option to free cards faster? And, in a worst-case scenario, is it possible to do without Silverstack? I'm still unsure whether the Stardom DR5-WBS3 will be that much of an upgrade, but I believe Silverstack might be. Any other recommendations would be appreciated, since I have very little experience in the field but would love to make my job and the camera department's job as easy as possible!

Your biggest bottleneck tends to be the backup drives, which are much slower than the camera card: at the LaCie's 70-80 MB/s, a 500 GB shooting day works out to roughly two hours per copy, before any verification. A slow computer may cause additional delays when using verified transfers like those in Silverstack. Generally, though, it is much faster to make at least one copy to a RAID or SSD very quickly, and then manage the remaining copies from that first backup, so that you dare to reuse the card. If you are shooting normal amounts of material, a single-RAID configuration should be no problem; normal means something like 400 to 700 GB per day. If it goes over about 1 TB per day, I like to use dual RAIDs in 4- or 6-drive Thunderbolt configurations.
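On the worst-case question: a verified offload does not strictly require Silverstack, because at its core it is copy, re-read, and compare checksums. Below is a minimal C# sketch of that idea; the paths are placeholders and SHA-256 is an illustrative choice (dedicated offload tools typically favor faster hashes such as xxHash), so this is a sketch of the principle, not Silverstack's actual implementation.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class VerifiedCopy
{
    // Hash a file by streaming it through SHA-256. SHA-256 is an illustrative
    // choice here; dedicated offload tools usually prefer faster hashes.
    static string HashFile(string path)
    {
        using var sha = SHA256.Create();
        using var stream = File.OpenRead(path);
        return BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
    }

    static void Main(string[] args)
    {
        string source = args[0];  // e.g. a clip on the camera card
        string backup = args[1];  // e.g. the destination path on the RAID

        File.Copy(source, backup, overwrite: false);

        // Verification pass: re-read both files and compare checksums
        // before anyone is allowed to format the card.
        if (HashFile(source) != HashFile(backup))
            throw new IOException($"Checksum mismatch, do not clear the card: {source}");

        Console.WriteLine($"Verified: {backup}");
    }
}
```

The sketch also shows where the verification time goes: the source is read twice and the backup is written once and then read once, which is why verified transfers are noticeably slower on slow drives.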
On the separate question of buffer sizes for binary I/O: generally, there is no "one size fits all" buffer size. You need to set a buffer size that fits the behavior of your algorithm. It is usually not a good idea to have a really huge buffer, but having one that is too small, or out of line with how you process each chunk, is not great either. If you are simply reading data one chunk after another entirely into memory before processing it, I would use a larger buffer, probably 8k or 16k, but probably not larger. On the other hand, if you are processing the data in streaming fashion, reading a chunk and then processing it before reading the next, smaller buffers might be more useful.

If you are doing streaming data processing, I would look into the BinaryReader and BinaryWriter classes. These allow you to work with binary data very easily, without having to worry much about the data itself. They also let you decouple your buffer sizes from the actual data you are working with: you could set a 16k buffer on the underlying stream and still read individual data values with the BinaryReader with ease. Even better, if you are streaming data that has structure, I would change the amount of data read to specifically match the type of data being read. For example, if the binary data contains a 4-character code, a float, and a string, I would read the 4-character code into a 4-byte array and the float likewise, then read the length of the string and create a buffer to read the whole chunk of string data at once.
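Here is a minimal sketch of that pattern, assuming a hypothetical records.bin whose records are a 4-character code, a float, and a UTF-8 string prefixed by an Int32 byte count (a layout invented for illustration; BinaryWriter's own Write(string) instead emits a 7-bit-encoded length prefix that ReadString understands). The 16k BufferedStream is what decouples the stream's buffer from the per-value reads:

```csharp
using System;
using System.IO;
using System.Text;

class StructuredReader
{
    static void Main()
    {
        // "records.bin" and its record layout are hypothetical, for illustration.
        using var file = File.OpenRead("records.bin");
        using var buffered = new BufferedStream(file, 16 * 1024); // 16k buffer on the underlying stream
        using var reader = new BinaryReader(buffered, Encoding.UTF8);

        while (buffered.Position < buffered.Length)
        {
            byte[] fourCc = reader.ReadBytes(4); // the 4-character code
            float value = reader.ReadSingle();   // the float
            int byteCount = reader.ReadInt32();  // assumed Int32 length prefix for the string
            string text = Encoding.UTF8.GetString(reader.ReadBytes(byteCount)); // whole chunk at once

            Console.WriteLine($"{Encoding.ASCII.GetString(fourCc)}: {value} \"{text}\"");
        }
    }
}
```

Each Read* call is served from the 16k buffer, so the many small reads do not become many small disk reads; the buffer size and the record structure stay independent of each other.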