Hi,
Could anyone shed some light on the following questions?
1. What determines whether a given type of I/O from an application is random or sequential? Is it determined by the application itself? That is, is the application deliberately programmed to issue random or sequential I/O, or does the pattern simply follow from how the application is used? For example, OLTP writes from SQL Server are mostly random, not because SQL Server is programmed to write that way, but because the requested data is not laid out sequentially (the nature of the usage). Is that correct?
2. How should "random" and "sequential" be understood? It doesn't make sense for an application to intentionally read and write addresses scattered all over the disk. So for new writes, I would expect the disk controller (or file system) to allocate capacity in sequential order, assuming the disk starts empty and is consumed by this application only. Correct me if I'm wrong.
3. Suppose we set IOmeter to issue 100% random read/write I/O. It appears random because IOmeter intentionally reads from and writes to scattered addresses; it is programmed to behave that way, so the I/O forces the disk to seek, which makes it look random. But for a general application writing to an empty disk, all new data should land at sequential disk addresses, unless it overwrites existing data, which causes seeking. Is my understanding correct? To make point 3 concrete, see the sketch right after this list.
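Here is a minimal Python sketch of the two access patterns I have in mind. The file name testfile.bin, the 4 KiB block size, and the 100 MiB file size are just illustrative values I picked; a real benchmark like IOmeter also bypasses the OS cache, which this toy does not:

```python
import random

PATH = "testfile.bin"            # hypothetical scratch file name
BLOCK = 4096                     # 4 KiB per I/O, a typical block size
FILE_SIZE = 100 * 1024 * 1024    # 100 MiB test file

# Create the scratch file (zero-filled is fine for this demo).
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

with open(PATH, "rb") as f:
    # Sequential pattern: each read continues exactly where the last
    # one ended, so a spinning disk's head barely has to move.
    for _ in range(1000):
        f.read(BLOCK)

    # Random pattern (IOmeter-style "100% random"): deliberately seek
    # to a scattered, block-aligned offset before every read, forcing
    # a head seek on each I/O.
    for _ in range(1000):
        f.seek(random.randrange(FILE_SIZE // BLOCK) * BLOCK)
        f.read(BLOCK)
```

If my understanding is right, on a spinning disk the first loop should run much faster than the second, purely because of head seeks, even though both transfer the same amount of data.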
In summary, "random" or "sequential" is really the disk's point of view. The application isn't aware of it; it doesn't know how to issue random or sequential I/O as such, unless it is deliberately programmed to, like IOmeter. The reason we see a random or sequential I/O profile from a certain type of application is the nature of its usage: it is asked to retrieve data located at scattered addresses (LBA -> CHS). So saying that application xxx issues random or sequential I/O is, strictly speaking, wrong.
All of the above is my personal guess and needs confirmation from you experts. Any help would be much appreciated!
Ah_Chao || MCSE, VCP, EMCSAe