Thanks a lot! I think I understand now.
I conclude that "read-ahead reads" and "physical reads" are both physical
reads, just performed by a different algorithm/mechanism. It would have been
less confusing for me if the MS developers had labelled them "regular
physical reads #" and "read-ahead physical reads #". You can have one
without the other, and you can have both when executing a query.
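For what it's worth, this is roughly how I look at the two counters side by side; the table name below is just a placeholder, not my real table:

set statistics io on;

select top (100) * from dbo.BigTable order by id asc;

-- The Messages tab then shows a line like:
-- Table 'BigTable'. Scan count 1, logical reads ..., physical reads X, read-ahead reads Y, ...
-- X = regular single-page (8-KB) physical reads, Y = pages brought in by the Read-Ahead Manager;
-- both end up being read from disk, just via a different mechanism.

set statistics io off;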
In this link I now read ...
http://technet.microsoft.com/library/Cc966464 ... read-aheads occur
with 64-KB I/Os, which provide higher disk throughput potential for
the disk subsystem than 8-KB I/Os do. When a large amount of data must
be retrieved from SQL Server, read-ahead is the best way to do it. The
Read-Ahead Manager benefits from the simpler and more efficient Index
Allocation Map (IAM) storage structure.
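One way I could check that larger I/O size myself would be something like this; just a sketch, and it averages over all reads against the data files, not only the read-ahead ones:

select db_name(vfs.database_id) as database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_bytes_read,
       vfs.num_of_bytes_read / nullif(vfs.num_of_reads, 0) as avg_bytes_per_read
from sys.dm_io_virtual_file_stats(db_id(), null) as vfs
join sys.master_files as mf
  on mf.database_id = vfs.database_id
 and mf.file_id = vfs.file_id;

-- If read-ahead is really issuing 64-KB (or larger) I/Os during a big scan,
-- avg_bytes_per_read should climb well above 8,192 while such a scan is running.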
One strange thing I see now in my performance tests ...
I have a table with 5,000,000 rows and a clustered index on an
identity column.
-- *1
select top 100 * from table order by id asc
-- most of the time: 0 physical reads and 3,000 to 5,000 read-ahead reads
-- *2
select * from table where id between 28000 and 38000
-- most of the time: 1 physical read and about 3,333 read-ahead reads
-- *3
select * from table where id between 2500000 and 2500100
-- most of the time: 1 physical read and about 33 read-ahead reads
--
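(By the way, this is roughly the harness I wrap around each of the three queries so they all start from a cold buffer pool and the counters are comparable; dbcc dropcleanbuffers is of course only for a test server, and the table name is again a placeholder.)

checkpoint;               -- write dirty pages so dropcleanbuffers can evict them
dbcc dropcleanbuffers;    -- start from a cold buffer pool
set statistics io on;

select top (100) * from dbo.BigTable order by id asc;   -- or query *2 / *3

set statistics io off;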
Since I have about 3 rows per page ...
*3 needs 33 + 1 pages (does not read more than needed)
*2 needs 3,333 + 1 pages (does not read more than needed)
*1 starts at the first row of the clustered index, so it doesn't need
much travelling to find the location of the first page, and then it
reads much more than required: I only need about 34 pages, but it reads
3,000+ pages, meaning it pulls roughly 10,000+ rows into RAM of which I
only need 100 for my query.
That seems like a lot to me: reading about 24 MB (3,000+ pages × 8 KB)
when only about 270 KB (34 pages × 8 KB) needed to be read to produce
the required results.
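Just to double-check my own arithmetic (8-KB pages, roughly 3 rows per page):

select 34 * 8            as kb_actually_needed,   -- ~100 rows at ~3 rows per page
       3000 * 8 / 1024.0 as mb_read_ahead,        -- ~3,000 read-ahead pages
       3000 * 3          as rows_pulled_into_ram; -- only 100 of these are returned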
This means SQL Server pushes 2,970+ other pages out of memory. OK,
possibly I'll need the read-ahead pages sooner than the ones being
evicted ... but it also means 24 MB of (not strictly required) traffic
between our SAN, the switches, and the virtual host / virtual guest.
That is probably fine for a physical server with local disks, but I
wonder whether this is beneficial (performance-wise) in our highly
virtualized infrastructure ...
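If I understand the trace-flag documentation correctly, trace flag 652 disables read-ahead (page pre-fetching), so on a test server I could compare the same query with and without it. Again just a sketch, with a placeholder table name and the flag set only for the session:

dbcc traceon (652);       -- disable read-ahead for this session (test server only!)
checkpoint;
dbcc dropcleanbuffers;
set statistics io on;

select top (100) * from dbo.BigTable order by id asc;

set statistics io off;
dbcc traceoff (652);      -- re-enable read-ahead

-- With the flag on, "read-ahead reads" should drop to 0 and "physical reads" should
-- climb, which would show how much the read-ahead behaviour actually costs or saves here.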
Kind regards
Peter