Stan Hope-Streeter, posted October 5, 2009

This, or something like it, is said frequently on this forum, but I would like to know why. Obviously, a square wave has a higher mean:peak ratio than a sine wave. But why would a speaker care whether a noise it is called upon to reproduce is distorted because the input stage of a power amp is clipping, or because Jimi Hendrix is playing a distorted guitar, or because a synth has picked a square-wave patch? Surely if the drive to a speaker is of appropriate bandwidth (no DC or LF for a mid-range drive unit, no excessive HF energy, etc.) and the mean and instantaneous power are within spec, then the speaker won't care what the signal is. What is it about an overdriven DJ mixer that makes it any worse than Limp Bizkit?

In effect, the speaker's power rating is different for different types of programme. It is much higher for undistorted (or only occasionally clipped) music with a high crest factor than it is for a continuous synth tone or a sine wave from a signal generator, and much lower still for a clipped sine wave or a square wave from any source. The manufacturer's rating is usually for undistorted music and a non-clipping amplifier. If they declared a true rating for the heavily compressed and clipped signals from the average DJ-driven console, nobody would buy the speakers.
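The crest-factor point above is easy to check numerically. A minimal sketch (my own illustration, not from the original post) comparing the peak-to-RMS ratio of a sine wave, a square wave, and a hard-clipped sine: the lower the crest factor, the more average power the speaker absorbs for a given peak level.

```python
import math

def crest_factor(samples):
    """Peak-to-RMS ratio of a sampled waveform (higher = more headroom)."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak / rms

N = 100000
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]
square = [1.0 if s >= 0 else -1.0 for s in sine]
# Sine driven 6 dB past the clip point, then hard-limited at +/-1:
clipped = [max(-1.0, min(1.0, 2.0 * s)) for s in sine]

for name, wave in [("sine", sine), ("square", square), ("clipped sine", clipped)]:
    print(f"{name:12s} crest factor = {crest_factor(wave):.2f}")
```

A pure sine gives a crest factor of about 1.41 (sqrt 2), a square wave exactly 1.00, and the clipped sine falls in between; undistorted music typically sits far higher still. At the same peak voltage, a square wave therefore delivers twice the average power of a sine, which is why a clipped signal heats a voice coil much faster than clean programme with the same peaks.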
olly DMT, posted October 14, 2009

So come on then, who's got the oldest speakers? LOL...
This topic is now archived and is closed to further replies.