Monday, June 29, 2009

Storage/Database Tuning: Whither Queuing Theory?

I was listening in on a discussion of a recent TPC-H benchmark by Sun (hardware) and ParAccel's columnar/in-memory database (cf. recent blog posts by Merv Adrian and Curt Monash), when a benchmarker dropped an interesting comment. It seems that ParAccel used 900-odd TB of storage to hold 30 TB of data, not because its storage was inefficient or to “game” the benchmark, but because disks are now so large that gaining the performance benefit of streaming from many spindles into main memory at once required that much raw capacity. Thus, if I understand what the benchmarker said, in order to maximize performance, ParAccel had to use 900-odd 1-terabyte disks simultaneously.

What I find interesting about that comment is the indication that queuing theory still means something when it comes to database performance. According to what I was taught back in 1979, I/Os pile up in a queue when the number of outstanding requests exceeds the number of disks, so at peak load twenty 500-MB disks can deliver much better performance than ten 1-GB disks of the same total capacity, although they tend to cost a bit more. The last time I looked, at list price 15 TB of 750-GB SATA drives cost $34,560, or 25% more than 15 TB of 1-TB SATA drives.
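To make that 1979 lesson concrete, here is a minimal M/M/c queueing sketch in Python (my illustration, not anything from the benchmark): the same aggregate I/O load spread across twenty spindles queues far less than across ten. The arrival rate and per-disk service rate are assumptions chosen only to show the shape of the effect.

    from math import factorial

    def erlang_c(servers, offered_load):
        """Probability that an arriving I/O has to wait (Erlang C), for
        'servers' disks and offered_load = arrival_rate / service_rate."""
        rho = offered_load / servers                       # per-disk utilization
        top = (offered_load ** servers) / factorial(servers) / (1 - rho)
        bottom = sum((offered_load ** k) / factorial(k) for k in range(servers)) + top
        return top / bottom

    def mean_response_time(arrival_rate, service_rate, disks):
        """Average I/O response time (queueing delay + service) for an M/M/c queue."""
        offered_load = arrival_rate / service_rate
        wait = erlang_c(disks, offered_load) / (disks * service_rate - arrival_rate)
        return wait + 1 / service_rate

    arrival_rate = 900.0   # I/O requests per second hitting the array (assumed)
    service_rate = 100.0   # I/Os per second one spindle can service (assumed)

    for disks in (10, 20):
        ms = mean_response_time(arrival_rate, service_rate, disks) * 1000
        print(f"{disks} disks: {ms:.1f} ms per I/O")

With these assumed rates, the ten-disk array runs at 90% utilization and each I/O picks up several milliseconds of queueing delay, while the twenty-disk array runs at 45% and serves requests almost immediately; that gap is what spreading the same data over more spindles buys you.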

The commenter then went on to note that, in his opinion, solid-state disk would soon make this kind of maneuver passé. I think what he’s getting at is that solid-state disk should be able to provide parallel streaming from within the “disk array”, without the need to go to multiple “drives”. This is because solid-state disk is main memory imitating disk: the usual parallel stream of data from memory to processor is constrained to look like a sequential stream of data from disk to main memory. But since this is all a pretence, there is no reason you can’t have multiple disk-to-memory “streams” within the same SSD, effectively splitting it into two, three, or more “virtual disks” (in the virtual-memory sense). It’s just that SSDs were so small in the old days that there didn’t seem to be any reason to bother.
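To picture those multiple streams in software terms, here is a rough sketch (purely my illustration; the device path, stream count, and chunk size are made-up assumptions): carve a single device’s address space into regions and read each region concurrently, as if each region were its own drive.

    import concurrent.futures
    import os

    DEVICE = "ssd_device.img"   # stand-in path for the SSD (hypothetical)
    NUM_STREAMS = 4             # number of "virtual disks" to carve out (assumed)
    CHUNK = 1 << 20             # read in 1 MB chunks

    def stream_region(start, length):
        """Sequentially read one region of the device, as if it were a separate drive."""
        done = 0
        with open(DEVICE, "rb") as f:
            f.seek(start)
            while done < length:
                buf = f.read(min(CHUNK, length - done))
                if not buf:
                    break
                done += len(buf)
        return done

    size = os.path.getsize(DEVICE)
    region = size // NUM_STREAMS
    with concurrent.futures.ThreadPoolExecutor(NUM_STREAMS) as pool:
        totals = pool.map(stream_region,
                          [i * region for i in range(NUM_STREAMS)],
                          [region] * NUM_STREAMS)
    print(sum(totals), "bytes streamed over", NUM_STREAMS, "parallel streams")

On a spinning disk those four streams would fight over one head; on an SSD, where the sequential-disk behavior is only an imitation, there is no physical reason they cannot proceed in parallel.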

To me, the fact that someone would consider using 900 TB of storage to achieve better performance for 30 TB of data is an indication that (a) the TPC-H benchmark is too small to reflect some of the user data-processing needs of today, and (b) memory size is reaching the point at which many of these needs can be met just with main memory. A storage study I have been doing recently suggests that even midsized firms now have total storage needs in excess of 30 TB, and in the case of medium-sized hospitals (with video-camera and MRI/CAT scan data) 700 TB or more.

To slice it finer: structured-data database sizes may be growing, but not as fast as memory sizes, so many of these workloads (old-style OLTP) can now be handled in main memory and (as a stopgap for old-style programs) on SSD. Unstructured/mixed databases, as in the hospital example, still require regular disk, but now take up so much storage that it is still possible to apply queuing theory to them by streaming I/O in parallel from data striped across hundreds of disks. Data warehouses fall somewhere in between: mostly structured, but still potentially too big for memory/SSD. But data warehouses don’t exist in a vacuum: the data warehouse typically sits physically in the same location as the unstructured/mixed data stores. By combining data-warehouse and unstructured-data storage and striping across disks, you can improve performance and still use most of your disk capacity, so queuing theory still pays off.
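For the striping side of that argument, a minimal round-robin (RAID-0-style) placement sketch shows why one big sequential scan turns into parallel streams; the stripe size and disk count here are assumptions for illustration, not numbers from any benchmark.

    STRIPE_SIZE = 1 << 20    # 1 MB stripe unit (assumed)
    NUM_DISKS = 100          # spindles the warehouse data is striped across (assumed)

    def locate(logical_offset):
        """Map a logical byte offset to (disk index, byte offset on that disk)."""
        stripe = logical_offset // STRIPE_SIZE
        disk = stripe % NUM_DISKS                 # round-robin placement across spindles
        stripes_on_disk = stripe // NUM_DISKS     # earlier stripes already on that disk
        return disk, stripes_on_disk * STRIPE_SIZE + logical_offset % STRIPE_SIZE

    # A 100 MB sequential scan touches 100 different spindles,
    # so its reads can be issued to all of them at once.
    disks_touched = {locate(i * STRIPE_SIZE)[0] for i in range(100)}
    print("spindles involved in the scan:", len(disks_touched))   # -> 100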

How about the next three years? Well, we know storage size is continuing to grow, perhaps at 40-50%, despite the recession, as regulations about email and video data retention continue to push the unstructured-data “pig” through the enterprise’s data-processing “python.” We also know that Moore’s Law may be beginning to break down, so that memory size may be on a slower growth curve. And we know that the need for real-time analysis is forcing data warehouses to extend their scope to updatable data and constant incremental OLTP feeds, and to relinquish a bit of their attempt to store all key data (instead allowing in-situ querying across the data warehouse and OLTP systems).

So if I had to guess, I would say that queuing theory will continue to matter in data warehousing, and that fact should be reflected in any new or improved benchmark. However, SSDs will indeed begin to affect some high-end data-warehousing databases, and performance tuning via striping will become less important in those circumstances; that, too, should be reflected in benchmarks. At the same time, it is plain that in such a time of transition, benchmarks such as TPC-H cannot fully and immediately reflect each shift in the boundary between SSD and disk. Caveat emptor: users should begin to make finer-grained decisions about which applications belong on which kind of storage tier.
