Compressive Sampling is a new field of sensing theory that sidesteps the traditional Nyquist sampling limit. The Nyquist limit is a result from sampling theory which says that to perfectly capture a signal whose bandwidth is limited to a certain frequency, X, you must take more than 2 times X samples per second. So for example, if you want to perfectly capture a 20Hz sine wave, you must take more than 40 samples a second. To perfectly represent audio that humans can hear (typically understood to be sounds in the 20-20,000Hz range) we must use at least 40,000 samples per second (40kHz). CD audio uses 44.1kHz, and professional audio systems typically sample at 48kHz. [Note that the number of samples per second has nothing to do with the quantization, or accuracy, of each sample, so audio that is quantized at 20 bits per sample has more information (and less noise) than audio quantized at 16 bits per sample.]
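To see what goes wrong below the Nyquist rate, here is a small sketch in Python using NumPy (the signal frequencies and sampling rate are illustrative, not from any particular system). A 30Hz tone sampled at 40Hz sits above the Nyquist frequency of 20Hz, so its samples are indistinguishable from those of a 10Hz tone:

```python
import numpy as np

fs = 40.0           # sampling rate in Hz (Nyquist frequency is fs/2 = 20 Hz)
n = np.arange(32)   # sample indices
t = n / fs          # sample times in seconds

# A 30 Hz tone is above the Nyquist frequency, so sampling "folds" it
# down to 10 Hz: the two sample sequences below are identical.
above_nyquist = np.sin(2 * np.pi * 30 * t)
alias = -np.sin(2 * np.pi * 10 * t)
print(np.allclose(above_nyquist, alias))  # prints True
```

Once the samples are taken, no algorithm can tell the two tones apart, which is exactly why classical sampling insists on exceeding the 2X rate.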
However, anybody who has listened to compressed audio (such as MP3 or AAC files) knows that we can safely throw out much of the data that does not contribute to the information in the signal. Lossy compression algorithms such as MP3, MPEG-4, and JPEG typically throw out a large percentage of the (unnecessary) data in a signal, and leave behind only the information that humans think is important. Many signals (audio, music, speech, images, movies) are very compressible, which means that they contain redundant data, or data that is not necessary for a human to reconstruct the original meaning.
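A toy illustration of this compressibility, sketched in Python with NumPy rather than any real codec: a two-tone signal has 256 samples, but in the frequency domain almost all of its Fourier coefficients are (numerically) zero, so keeping only the 8 largest reproduces it almost perfectly:

```python
import numpy as np

# A short "audio-like" signal made of two tones, sampled at 256 points.
n = 256
t = np.arange(n) / n
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# In the Fourier basis the signal is sparse: only a handful of
# coefficients are significant. Keep the 8 largest, zero the rest.
X = np.fft.fft(x)
keep = np.argsort(np.abs(X))[-8:]
X_compressed = np.zeros_like(X)
X_compressed[keep] = X[keep]
x_approx = np.fft.ifft(X_compressed).real

# Relative error of the 8-coefficient approximation (essentially zero)
err = np.linalg.norm(x - x_approx) / np.linalg.norm(x)
print(err)
```

Real codecs like MP3 and JPEG use more elaborate transforms and perceptual models, but the underlying idea is the same: most coefficients carry almost no information and can be discarded.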
However, our current recording technology still performs a full sampling of such data. Audio recorders sample at 48kHz (typically to WAV files) and only then encode the signal as a (lossy) MP3 to save space. Cameras capture a full grid/array of pixel data (expensive cameras can save this as a RAW image) and only after the capture use JPEG compression to shrink the image before storing it to flash memory.
The magic of Compressive Sampling is that it allows the sensing and compression steps to be combined. Using compressive sampling, a physical capture device can record less information by only sampling the signal as many times as is needed to reconstruct a lossy (compressed) version of the signal. In fact, for many signals, it is possible to sample randomly and reconstruct a lossy representation of the original signal. For example, if you wanted to generate a compressed 100×100 pixel image, you could sense only 5,000 random pixels instead of collecting the full array of 10,000 pixels.
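The following NumPy sketch shows the idea on a 1-D signal. All the parameters here (signal length 100, 4 nonzero entries, 60 random measurements) are illustrative, and the reconstruction uses Orthogonal Matching Pursuit, one of several algorithms used in compressive sensing; practical systems often use L1 minimization instead:

```python
import numpy as np

rng = np.random.default_rng(0)

# A length-100 signal that is sparse: only 4 of its 100 entries are nonzero.
n, k, m = 100, 4, 60
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.choice([-2.0, 2.0], size=k)

# Take m = 60 random linear measurements instead of all 100 samples.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# Orthogonal Matching Pursuit: greedily pick the column of A most
# correlated with the residual, then re-fit those columns by least squares.
idx = []
residual = y.copy()
for _ in range(k):
    idx.append(int(np.argmax(np.abs(A.T @ residual))))
    cols = A[:, idx]
    coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
    residual = y - cols @ coef

x_hat = np.zeros(n)
x_hat[idx] = coef
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(err)  # near zero when the support is recovered
```

The measurements are random mixtures of all 100 entries, yet because the signal is sparse, 60 of them suffice to recover it; the sensor never needed to observe the full grid.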
Details and more information can be found at Rice University's website on compressive sampling.