
A Technical Look at Sampling Theory in Non-Technical Terms

The "right" audio interface for your studio depends on your CPU, OS, motherboard, and the robustness of its drivers. If we all put our heads together we might be able to make a decision more intelligently. Use all advice given here at your own risk. This forum is only for FireWire and USB (1.1 and 2.0) interfaces.

Moderator: Tweak


Postby Rigosuave on Wed Oct 04, 2006 7:48 pm

There is an old adage derived from the Bible that goes something like this: “No one is a prophet in their own land.” Never does this seem to be truer than when you have two old friends discussing audio and the merits (or lack thereof) of recording software, platforms, hardware or sample rates. Allow me to explain. I have a really good friend who is just as much into recording as I am. The problem is we both learned the art/science of recording via two totally different routes. As such, about the only thing we can agree on is to disagree. For example, he’s a staunch Mac user – primarily because he’s very adept at making PCs crash. I prefer neither the PC nor the Mac – because yes, believe it or not, Macs will also crash. I prefer the combination of a standalone hard disk recorder and a computer in the loop (be it a Mac or PC) for the system’s superb stability and powerful editing capabilities. He prefers Digital Performer and Sonar while I’m partial to Pro Tools and Adobe Audition. And, up until recently, we even disagreed on what sampling frequency to use.

Platform and choice of software, in general, are merely a matter of individual preference. Now yes, there are some differences that might make a piece of software better or easier to use for a given task, but in terms of final sound quality, that’s more dependent on the person driving the software, how good his or her ears are, and how skilled they are. The fact of the matter is that it really doesn’t matter much what platform or what sequencing/recording software you use – especially if you’re working in the digital domain, because in the end you’re still manipulating the same 1s and 0s as everyone else. The same cannot necessarily be said for choice of sampling frequency.

The primary motivation for this article is to help dispel the widespread misconceptions regarding sampling of audio at a rate of 192 kHz. This misconception, perhaps propagated by industry salesmen, is built on false premises, contrary to the fundamental theories that made digital communication and processing possible.

Background Info

First allow me to lay down a foundation. Dr. Nyquist is credited with discovering one of technology’s fundamental building blocks – the sampling theorem. He discovered it while working for Bell Labs. The sampling theorem states the following: a sampled waveform contains ALL the information, without any distortion, when the sampling rate exceeds twice the highest frequency contained in the sampled waveform.
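The theorem can be checked numerically. Below is a minimal sketch (assuming NumPy is available, and using the Whittaker-Shannon interpolation formula, the standard reconstruction behind the theorem – the specific frequencies are illustrative): a 1 kHz sine sampled at 8 kHz is recovered at an instant that falls between two samples.

```python
import numpy as np

fs = 8000.0   # sampling rate (Hz), comfortably above 2 x 1000 Hz
f = 1000.0    # frequency of the sampled sine (Hz)
n = np.arange(4000)                       # half a second of samples
samples = np.sin(2 * np.pi * f * n / fs)  # the sampled waveform

def reconstruct(t, samples, fs):
    """Whittaker-Shannon interpolation: a sinc kernel summed at each sample."""
    k = np.arange(len(samples))
    return float(np.sum(samples * np.sinc(fs * t - k)))

# Pick an instant that falls between two samples, far from the record's edges.
t = 2000.25 / fs
estimate = reconstruct(t, samples, fs)
actual = np.sin(2 * np.pi * f * t)
# estimate and actual agree to within the truncation error of the finite sum.
```

The samples alone, interpolated this way, land back on the original waveform between the sample points – which is what "ALL the information" means in practice.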

The notion that more is better may appeal to one's common sense. When presented with analogies such as more pixels for better video, or faster clocks to speed up computers, one may be misled to believe that faster sampling of audio will yield better resolution and detail. The analogies are wrong. The great value offered by Nyquist's theorem is the realization that we have ALL the information we need with 100% of the detail, and no distortions, without the burden of "extra fast" sampling.

Audio Bandwidth

Without getting too technical, Nyquist pointed out that the sampling rate need only exceed twice the signal bandwidth. What is the audio bandwidth? Research shows that musical instruments can produce energy above 20 kHz, but there is little “sound energy”, if any, at 40 kHz. Most microphones do not pick up sound much above 20 kHz. Human hearing rarely exceeds 20 kHz, and certainly does not reach 40 kHz. All of the above suggests that 88.2 kHz or 96 kHz would be overkill in terms of sampling rates. In fact, all the objections regarding audio sampling at 44.1 kHz (including the arguments relating to pre-ringing of an FIR filter) are long gone when you increase sampling to about 60 kHz.

Many people report that they get better sound at higher sampling rates. There is no doubt in my mind that the folks who like the "sound of a 192 kHz" converter hear something. But it clearly has nothing to do with more bandwidth: the instruments make next to no sound at 96 kHz, the microphones don't respond to it, the speakers don't produce it, and the ear cannot hear it. If you like what you hear and want to use it, go ahead. But whatever you hear, it’s not due to energy above audio frequency; it is all contained within the "lower band". It could be that certain types of distortion sound good to you.

Moreover, we hear reports that more of that “special quality” is captured and retained by 192 kHz even when downsampling to 44.1 kHz. Such reports neglect the fact that material sampled at 44.1 kHz cannot contain audio above 22.05 kHz.
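That limit is baked into the math of the format itself. A minimal check (assuming NumPy): the frequency bins of a discrete Fourier transform of 44.1 kHz material top out at exactly half the sample rate, so there is simply nowhere for content above 22.05 kHz to live.

```python
import numpy as np

fs = 44100                        # CD sampling rate (Hz)
n = fs                            # one second of samples
freqs = np.fft.rfftfreq(n, d=1.0 / fs)   # all representable frequencies

highest = freqs.max()             # fs / 2 = 22050 Hz, the Nyquist limit
# No bin exists above 22.05 kHz: nothing above it can be represented.
```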

Some claim that 192 kHz is closer to the sound of audio recorded onto tape. Yet that same tape, which typically contains "only" 20 kHz of audio, gets converted to the digital domain by a 192 kHz A/D converter and is then stripped of all possible content above 22 kHz when it’s downsampled to conform to CD standards.

Issues at 192 kHz

One issue is that sampling at 192 kHz produces larger files, requiring more storage space and slowing down the transmission of the audio. Sampling at 192 kHz also places a huge burden on computational processing requirements. There is, in addition, a tradeoff between faster sampling and loss of accuracy. The compromise between speed and accuracy is a permanent engineering and scientific reality. Speed-related inaccuracies are due to real circuit considerations, such as charging capacitors, amplifier settling, and more. Slowing down improves accuracy. So as not to get too technical, I would refer the reader to the article “Sampling Theory for Digital Audio” by Dan Lavry of Lavry Engineering, Inc., which provides a more detailed explanation of how sampling speed affects accuracy. The article can be found at the Lavry Engineering website.
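The storage and transmission burden is straightforward arithmetic. A quick sketch (the 24-bit stereo figures are illustrative assumptions, not from the article):

```python
def data_rate_mb_per_min(sample_rate, bit_depth=24, channels=2):
    """Uncompressed PCM data rate in megabytes per minute."""
    bytes_per_second = sample_rate * channels * bit_depth / 8
    return bytes_per_second * 60 / 1e6

rate_44k = data_rate_mb_per_min(44100)    # ~15.9 MB per minute
rate_192k = data_rate_mb_per_min(192000)  # ~69.1 MB per minute
```

At 24-bit stereo, 192 kHz eats storage and bus bandwidth about 4.35 times faster than 44.1 kHz for the same minute of audio.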

Now, can it be that someone made a really good 192 kHz device, and even after downsampling the audio contained fewer distortions? Not likely. The same converter architecture can be optimized for slower rates, and with more time to process it should be more accurate (fewer distortions).

The danger here is that people who hear something they like may associate better sound with faster sampling, wider bandwidth, and higher accuracy. This indirectly implies that lower rates are inferior which is simply not true. Whatever one hears on a 192 kHz system can be introduced into a 96 kHz system, and much of it into lower sampling rates. That includes any distortions associated with 192 kHz gear, much of which is due to insufficient time to achieve the level of accuracy of slower sampling.

Conclusion

Sampling audio signals at 192 kHz is about three times faster than the optimal rate.
It compromises accuracy, which ends up as audible distortion. While there is no upside to operating at excessive speeds, it definitely has its disadvantages:

1. The increased speed produces a larger amount of data (impacting data storage and data transmission speed requirements).
2. Operating at 192 kHz causes a very significant increase in the required processing power, resulting in very costly gear and/or further compromise in audio quality.
3. There is an inescapable tradeoff between faster sampling on one hand and a loss of accuracy on the other.

The optimal sample rate should be based largely on the required signal bandwidth. Audio industry salesmen have been promoting faster-than-optimal rates. The promotion of such ideas is based on the fallacy that faster rates yield more accuracy and/or more detail. Whether motivated by profit or ignorance, the promoters, leading the industry in the wrong direction, are stating the opposite of what is true.
Rigosuave
Member
Posts: 17
Joined: Thu Aug 31, 2006 10:08 pm
Location: Arizona

Postby Marco on Wed Oct 04, 2006 8:44 pm

The primary motivation for this article is to help dispel the wide spread misconceptions regarding sampling of audio at a rate of 192 kHz.


Great article Rigosuave, but I disagree that there are "widespread misconceptions" about sampling at 192 kHz.

The question comes up in this forum occasionally and I don't think I've seen anyone speak in support of 192 kHz sampling. Even the clerk I talked to at my local Guitar Center said "Forget about 192, you don't need it."

Where are the widespread misconceptions?

Just because the manufacturers are hyping 192, doesn't mean people are buying the hype.

The question for me is: why are they pushing 192 kHz?

--Marco
First they ignore you, then they laugh at you, then they fight you, then you win. -- Mahatma Gandhi
Marco
Member
Posts: 95
Joined: Mon Aug 15, 2005 10:59 pm
Location: South Carolina Sea Islands

Postby Rigosuave on Wed Oct 04, 2006 9:01 pm

Perhaps I was a little "overzealous" in my use of the term "widespread misconceptions". The point I was merely trying to make is, as you also point out, that you don't need high sample rates to replicate an analog signal EXACTLY like the original. I'll probably catch some flak for this statement but, per the sampling theorem, all you really need is 44.1 kHz. I'm hoping you'll agree with me, though, that a lot of new hardware is being marketed on how high a sample rate it can provide, and that less informed people buy into the statement ("myth?") that higher is better.
Rigosuave
Member
Posts: 17
Joined: Thu Aug 31, 2006 10:08 pm
Location: Arizona

Postby owel on Wed Oct 04, 2006 10:06 pm

>I'm hoping you'll agree with me though that a lot of new hardware is being marketed with respect to how high a sample rate they can provide and that less informed people buy into the statement ("myth?") that higher is better.


It's a numbers game. The higher the number, people think it's better.

From a manufacturer's point of view, it's easier to buy an A/D chip from TI or whomever with 192 kHz capability, and it automatically becomes a marketing feature. It's understandable. I guess it can be considered "truth in advertising" since they're using a 192 kHz chip.


I'll probably catch some flak for this statement but, per the sampling theorem, all you really need is 44.1 kHz.


Maybe for a CD... but DVD audio is at 48 kHz :)

So if the project will end up on DVD, your sampling rate should be at least 48 kHz.
owel
Forum Moderator
Posts: 8775
Joined: Fri Sep 27, 2002 12:10 pm
Location: 36.0° N 86.8° W

Postby Tweak on Wed Oct 04, 2006 11:58 pm

Brilliant article
Tweak
Resident TweakHead
Posts: 29177
Joined: Sat Sep 21, 2002 2:08 am
Location: USA

Postby Marco on Thu Oct 05, 2006 11:55 am

I'm hoping you'll agree with me though that a lot of new hardware is being marketed with respect to how high a sample rate they can provide and that less informed people buy into the statement ("myth?") that higher is better.


I do indeed agree with you. In fact, when I was just getting started with home recording I was one of those people who assumed that a higher sample rate was better. That is, until some posters in this forum set me straight.

What bothers me is that someone just starting out might try to record at a 192 kHz sample rate and then have problems with running out of computer resources, and the other problems you described. So it seems to me the manufacturers could be causing problems for their customers because of the 192 kHz hype.

They need to be brought to task for this, and your article helps do that.

--Marco
First they ignore you, then they laugh at you, then they fight you, then you win. -- Mahatma Gandhi
Marco
Member
Posts: 95
Joined: Mon Aug 15, 2005 10:59 pm
Location: South Carolina Sea Islands

Postby Hybridrummer on Mon Oct 09, 2006 5:46 am

OK, although the main point of your article is very accurate in terms of the 'need' for higher sampling rates, your understanding of Nyquist's theorem is a bit slanted. Nyquist's theorem states that in order to capture the full bandwidth of what's being recorded, the sampling rate should be at least twice the given frequency being recorded. The keyword here is AT LEAST. Sampling is purely "snapshots" taken of the audio waveform being recorded. A sampling 'rate' of 44.1 kHz means it is taking those 'snapshots' 44,100 times per second. The limitation of a 44.1 kHz sampling rate is that the highest frequency you can record while still conforming to the theorem is 22.05 kHz. This is where the fidelity issue comes in. If you are looking at catching a better 'picture' of the original waveform, a higher sampling rate is your next step. How high? Obviously 192 kHz is going over the line... or is it? For most uses it is, but some DVD audio makers I know chose to sample specific audio at 192 kHz because it captures a higher resolution of the original waveform and translates better when downsampled and compressed to DVD format than standard 44.1 kHz audio. Again, this is purely the preference of the person behind the wheel, but when they "hear something" they like, it is because most of the supersonic harmonics not captured by lower sampling rates make it into the end result.

Storage is definitely an issue. Most people can't afford over a terabyte of storage, and moreover that's a lot of defrag time to maintain everything! But it all depends on your need. I'm not advocating the marketing trend of hyping the need for faster and faster sampling rates, rather clarifying what sampling rates can do for those who need to know.

Finally, the accuracy of which you speak being the downfall of higher sampling rates is all dependent on the quality of the clock. This is why companies are selling standalone master clocks like the Big Ben, and a few others. With the speed and processing power of new chip technology, not to mention the affordability of a DIY PC-based system done right (always check compatibility before buying software, soundcards, etc.), there is no reason why one could not build a high-sampling-rate system that is stable and reliable. Note: master clocks are not on the cheap side (neither are quality converters!).

All in all, the need for 'higher' sampling rates (still not advocating 192 kHz for most needs) should only be for those who truly understand every aspect of the sound they are trying to capture. There is definitely a bell curve that most of us fit in because of the human hearing range, but that doesn't mean others can't benefit from the advantages higher sampling rates offer. Most of us simply don't 'hear' the difference, nor need it.
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby Hybridrummer on Mon Oct 09, 2006 6:02 am

Here is a link to an article that helps clarify my statements about converters and Nyquist's theorem. Read it carefully :D

http://www.eepn.com/Locator/Products/ArticleID/33507/33507.html
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby Hybridrummer on Mon Oct 09, 2006 6:15 am

[image attachment]
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby Rigosuave on Wed Oct 11, 2006 1:42 am

Hybridrummer wrote:If you are looking at catching a better 'picture' of the original waveform, a higher sampling rate is your next step.


Actually, Hybridrummer, now I must disagree (if I understand your comments correctly; I apologize in advance if I've misunderstood you).

Your statement about being able to catch a better picture of the original waveform by sampling at a higher rate is inaccurate. What the sampling theorem states is the following: a sampled waveform contains ALL the information, without any distortion, when the sampling rate exceeds twice the highest frequency contained in the sampled waveform.

You can't get a "better picture" of your waveform by sampling at a higher rate, because merely by sampling at a rate that exceeds twice the highest frequency contained in the waveform (which for all intents and purposes is limited by human perception to about 22 kHz), you are able to replicate your original waveform EXACTLY. You can't get any better than that. If I can dig up my old electrical engineering references, I'll post them for anyone who cares to go through the math. I've even proven this to myself in the lab on many occasions.

regards,
Rigosuave
Rigosuave
Member
Posts: 17
Joined: Thu Aug 31, 2006 10:08 pm
Location: Arizona

Postby Hybridrummer on Wed Oct 11, 2006 2:59 am

I don't mind you disagreeing at all; it's only that I've studied the nature of sound and synthesis for over two years now, and the information you and I can both agree on leads me to believe I must stand my ground on my understanding of the sampling theorem. The concept of digital sampling in and of itself is the same as the "sample and hold" function found in a variety of synthesizers new and old. Understanding how sampling is actually performed would only further prove my argument that you CAN get a better picture at slightly higher sampling rates (I don't disagree that 192 kHz is ridiculous, and only good for capturing a CPU's noise as well), but common sense and logic both point one to the conclusion that taking more slices of the cake actually gets you closer to the cake as a whole. There might be some technical exceptions that I am not aware of hidden in the development of Nyquist's theorem, so if anyone knows what those might be, please enlighten everyone so no more confusion gets spread about Nyquist's theorem, which seems to be the remaining issue.

I hope I am not coming across as rude; I just want to get to the bottom of our two different interpretations of what the theorem states. I take it my links didn't help any?
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby Hybridrummer on Wed Oct 11, 2006 3:02 am

This is a pretty good article on sampling which might prove useful if you haven't read it already... http://www.gcmstudio.com/rtech/rtech.html
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby Hybridrummer on Wed Oct 11, 2006 4:19 am

I just researched this entire issue of Nyquist's theorem, and I disturbingly find a hint of plagiarism in the word usage and structure of your post, Rigosuave. On Wikipedia's explanation of aliasing (after reading much of the mathematics applying to Nyquist's actual theorem) I saw yet another link to a paper written by Dan Lavry. I began reading his paper, and I think it is safe to say that you derive much of your information either from this paper or indirectly from the teachings of someone whose curriculum is based on it. No matter, I just find it very odd. Here is the link; if you scroll down to the bottom, you will find text beside the link to Dan Lavry's article saying that he opens his article with an incorrect statement about Nyquist (either the man or the theorem, I'm not sure). http://en.wikipedia.org/wiki/Aliasing

The conclusions are even almost exactly the same...perhaps you just forgot to give credit?...or you are Dan Lavry :?
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby adhesive on Wed Oct 11, 2006 12:15 pm

How often do any of us read an article and then write something, incorporating our newly acquired wisdom into the response? I do it all the time. He did link to the Lavry article, by the way.
adhesive
Top Contributor
Posts: 2108
Joined: Wed Mar 03, 2004 6:10 pm
Location: Sweet Home Arizona

Postby montsamu on Wed Oct 11, 2006 12:59 pm

Hybridrummer wrote:The conclusions are even almost exactly the same...perhaps you just forgot to give credit?...or you are Dan Lavry :?


Forgetting to give credit?

So as not to get too technical, I would refer the reader to the article “Sampling Theory for Digital Audio” written by Dan Lavry, Lavry Engineering, Inc. This provides a more detailed explanation of how sampling speeds affects accuracy. The article can be found at the Lavry Engineering website.
montsamu
Gold Member
Posts: 296
Joined: Thu Sep 07, 2006 1:33 am
Location: Durham, NC, US

Postby Hybridrummer on Thu Oct 12, 2006 4:36 am

Thanks for pointing that out. In my flurry to correct his misunderstanding of sampling, I skimmed over the post; my sincere apologies!!!
I still stand by my statements and links to facts about sampling that improve upon your post, Rigosuave. Sampling, and how the theorem regulates it, is very clear when the process itself is understood. Think of the way steps look when drawn two-dimensionally on paper. Each step of increasing level represents a point at which a sample is taken. Now of course we must apply the (correct) Nyquist theorem (which I've linked to relentlessly), which states that the minimum sampling rate required to accurately represent the waveform without causing aliasing must be at least twice the highest frequency of the signal being sampled. With that in mind, the number of samples, or "steps" as I have analogized, directly contributes to the resolution (accuracy, quality, etc.) of the original waveform. This is not to say that such high sampling rates are beyond the average need of most human ears, nor that great gains are achieved by such sampling, but there can be no argument that higher sampling rates (when executed under the proper conditions) yield a better resolution of the original waveform. I have looked at the math involved in Nyquist's theorem, and essentially a LOT of number crunching (delta functions, Fourier algorithms, etc.) makes up the distance between sample points (quite accurately at that, of course, when Nyquist's theorem is applied). Again, I must stress that my whole point in arguing your post is that Nyquist's theorem does not prove that the MAXIMUM needed resolution is obtained at merely twice the highest frequency being sampled; rather, to accurately represent the waveform, the sampling rate MUST BE AT LEAST twice the highest frequency. (Now, what the highest frequency might be in terms of music is another debate, since some humans can and do hear above (and below) the average peak of 20 kHz.)

Please excuse the misunderstanding of where you derived the information for your post. As a college student, I have been relentlessly drilled in the proper techniques of citing sources and quoting passages, and they were not properly exercised in your post. Call me a perfectionist, I guess :oops: 8)
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby Hybridrummer on Thu Oct 12, 2006 4:39 am

Adhesive, there is a difference between integrating newly acquired information 'in your own words' and plagiarism. I don't know about all of you, but plagiarism is not taken lightly in the educational systems I have been/am going through.
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby Darkhorse on Fri Oct 13, 2006 1:49 pm

Hybridrummer wrote:common sense and logic both point one to the conclusion that taking more slices of the cake actually gets you closer to the cake as a whole. There might be some technical exceptions that I am not aware of hidden in the development of Nyquist's theorem, so if anyone knows what those might be, please enlighten everyone so no more confusion gets spread about Nyquist's theorem, which seems to be the remaining issue.



Hey HD, I'm a college mathematics professor and Rigosuave's post is spot on. Contrary to what common logic dictates, your assumption that more slices of the cake gets you closer to the cake as a whole is incorrect. The point Rigosuave is trying to make is that you DON'T need any more slices to get the entire cake exactly as it was prior to the slicing. Not an approximation of the cake, mind you; I mean the cake EXACTLY as it was before "slicing". And yes, there are more technical/mathematical reasons (I replaced your word "exceptions") behind this that you may not be aware of. Ever taken a course on the Fourier transform? Since you're a college student, go see one of your electrical/computer engineering professors and ask them to explain this.
Darkhorse
Newbie
Posts: 1
Joined: Fri Oct 13, 2006 1:30 pm

Postby Hybridrummer on Sat Oct 14, 2006 4:11 am

Thanks, I took trig my first year, and I understand the math enough to know that the majority of the calculations used to "approximate" the original waveform between samples are just that: approximations. I must say, the math is darn near perfect, but imagine how much easier the calculations become when more accurate information (a higher number of samples per second) is introduced into the equations. There is a lot less mathematical guesswork and approximation between samples that truly represent the waveform as it naturally occurs. I'm not going to argue this any more, as I have already consulted my audio professor David Dow, who has been in the industry for over 30 years, is more than familiar with the nature of sound, and has been instructing classes specific to sound synthesis as well. Furthermore, he is a well-trained musician and a graduate with various degrees pertaining to audio. I will definitely look into the Fourier transform; the more knowledge the better! Again, I want to stress that I am not saying that great increases in quality/clarity/resolution are gained with higher sampling rates, but that there is in fact a direct relation between sampling rate and quality, even as it pertains to the Nyquist theorem (which, in my opinion as well as my professor's, supports common logic).
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby Viktor on Sat Oct 14, 2006 5:54 pm

@Rigosuave and @Darkhorse,

sorry guys, you're both wrong!
Viktor
Member
Posts: 7
Joined: Sat Jul 15, 2006 9:27 am

Postby fragie on Sat Oct 14, 2006 6:47 pm

Am I correct to assume that you are claiming that if I have a perfect sine wave at 1 kHz, I can make an exact copy digitally if I sample it at 2 kHz?

I reeeeally hope not! If so, you guys definitely need to go get your school money back! I'm "only" almost done studying for a Bachelor of Science in electronics engineering, but unless I really misunderstood the message some of you are sending, I'm already laughing!

Let me give you a hint at what you would actually be recording in the above example:
At best: a 1 kHz triangle wave at a constant amplitude greater than 0 and less than or equal to that of the original sine.
At worst: nothing at all.
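This edge case is easy to check numerically (a sketch, assuming NumPy): at exactly twice the signal frequency, every sample of a sine lands on a zero crossing, while a cosine at the very same frequency is captured at full amplitude. Amplitude and phase are ambiguous at that rate, which is why the theorem demands strictly greater than twice the bandwidth.

```python
import numpy as np

fs = 2000.0   # sampling rate exactly 2x the tone frequency
f = 1000.0
n = np.arange(16)

sine_samples = np.sin(2 * np.pi * f * n / fs)    # sin(pi * n): all zeros
cosine_samples = np.cos(2 * np.pi * f * n / fs)  # cos(pi * n): alternating +/-1

# Same frequency, same rate -- one signal vanishes, the other survives.
```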

For the record: I record at 48 kHz, and nothing in my signal chain makes it worthwhile for me to sample at a higher rate, but I have no problem understanding why people with nice gear would want a higher sample rate.
- fragie
fragie
Silver Member
Posts: 133
Joined: Tue Oct 03, 2006 9:36 am
Location: Denmark - Odense

Postby WuTangChan on Tue Oct 31, 2006 1:24 am

Getting away from the arguments about how much cake you need and slices and all that, so far I haven't seen anything written on analog filtering, which is a pretty important part of the ADC and DAC process. A lot of the benefit people hear when comparing something recorded at 44.1 kHz versus 96 kHz comes from the types of filters/roll-offs that can be used to rid the audio of alias tones. Without getting too technical, the analog signal needs to be filtered before it goes digital to remove any frequencies that would be aliased, and again after it is converted back to analog (by a so-called reconstruction filter). Being analog, these filters aren't perfect: they have either slow roll-offs, or quick roll-offs with ripples and other distortions. If the filter has a slow roll-off, it will either miss some of the alias frequencies or need a cutoff closer and closer to the highest represented frequency in order to sufficiently roll off the alias frequencies. A lot of the time this means the filter is actually rolling off content at the top end of the audible spectrum.

If the roll-off is quick, you get ripples and other imperfections you don't want.

By sampling at much higher rates than is necessary to represent a waveform accurately, one can use a slower roll-off with a cut-off at a frequency that is well above the audible spectrum and that easily takes out the alias tones.

Even if you upsample something that was originally done in 44.1k or even 48k there is a pretty noticeable difference in quality, and nothing about the original signal has changed...only your filters will be different.
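The aliasing these filters exist to prevent can be shown in a few lines (a sketch, assuming NumPy, with illustrative frequencies): without filtering, a 26 kHz tone sampled at 44.1 kHz produces the exact same samples as an 18.1 kHz tone, so the converter has no way to tell them apart.

```python
import numpy as np

fs = 44100
f_in = 26000                 # above fs/2 = 22050: illegal without filtering
f_alias = fs - f_in          # 18100 Hz, inside the audible band
n = np.arange(1000)

tone = np.sin(2 * np.pi * f_in * n / fs)
alias = -np.sin(2 * np.pi * f_alias * n / fs)

# The two sample sequences are identical: once digitized, the 26 kHz
# input is indistinguishable from an 18.1 kHz tone. The anti-alias
# filter must remove such content *before* sampling.
```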
Last edited by WuTangChan on Tue Oct 31, 2006 5:45 pm, edited 1 time in total.
WuTangChan
Member
Posts: 16
Joined: Tue Sep 14, 2004 12:54 pm

Postby Hybridrummer on Tue Oct 31, 2006 4:08 am

Good stuff WuTang. I don't know much about the complexities of the filters, but I can only assume that with higher-priced converters come better/more useful filters as well. I would also like to stress that bit depth is very important in terms of quality, so anyone reading all of this should keep that in mind; it's not just about sampling rates... :wink:
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby Hybridrummer on Tue Oct 31, 2006 4:12 am

I think it would be very innovative if more levels of control could be implemented in the smaller steps of A/D and D/A conversion, such as being able to choose different types of roll-off filters to suit the needs of the project.
Live Life to Learn!
Hybridrummer
Gold Member
Posts: 483
Joined: Thu Nov 03, 2005 3:37 am
Location: California

Postby Farview on Tue Oct 31, 2006 10:43 am

The point that keeps getting glossed over is that once you have an exact representation of the signal, you can't get any better than that.

It doesn't matter if you cut the cake in 4 pieces or 100, you still get the same cake when you put it back together.


You are also leaving out oversampling.
Jay Walsh
I am no longer here at this forum, neither are any of the others who help. JOIN US ALL HERE!
Farview
Forum Moderator
Posts: 6226
Joined: Wed Apr 20, 2005 12:03 am
Location: Chicagoish
