OptEQ – Optimized System Tuning

by Kent Margraves

OptEQ – System tuning and equalization in a logical, accurate and repeatable process…

OptEQ Class Photo

Attendees of the OptEQ workshop at the American Airlines Center in Dallas, with instructors (left to right) Pat Brown, John Murray and Deward Timothy up front.

AS THE NAME IMPLIES, “OptEQ” is an optimized sound system tuning process developed by SynAudCon (Synergetic Audio Concepts), and it was presented to an audience of more than 100 consultants, contractors and end users over the course of two days in early January at the American Airlines Training Center in Ft. Worth. I was fortunate to be one of those attendees.

As usual, the workshop was brilliantly prepared, hosted, and presented. Pat and Brenda Brown of SynAudCon are simply the best in the pro audio business at delivering exceptional education and training in both audio theory and practical application. The course manual, loaded with relevant information, analogies, and useful graphics, is worth the cost of the workshop alone.

Another highlight was the projection system. In addition to the course presentation, it displayed audio measurements (live and stored) and signal processing interfaces. An atrium adjacent to the classroom housed a variety of loudspeakers used for real-time measurement demonstrations. All attendees were provided with wireless stereo audio delivery (via Shure PSM personal monitor systems) in real time, allowing us to hear the loudspeaker measurements while seeing them displayed on screen (GUI) and also visualized (on another screen) via camera.

Thus as measurements were made in the atrium – with manual movement of the measurement microphone position – attendees in the classroom heard each measurement in real time while seeing the captured data on screen, with the camera link helping us understand where the measurement mic was positioned at each moment.

Primary presenter Pat Brown was joined by two guests who also have decades of experience optimizing sound systems, John Murray of Optimum System Solutions (Woodland Park, CO) and Deward Timothy of Poll Sound (Salt Lake City). As a result, OptEQ covered a lot of ground, much of it fairly complex. The subject of tuning sound systems is daunting in itself, and presenting it to a roomful of widely experienced and educated practitioners is quite a task. There were numerous lively, spirited discussions.

It all starts with this sticky question: what is EQ? It’s a commonly and often loosely used term, with several arguable definitions. Pat provided some thoughts on the subject in the previous issue (February 2015 LSI), and I encourage you to re-read it. For the purposes of this workshop, “EQ” meant the equalization process of adjusting the balance of frequency components of a sound system, and the presenters aimed to explain and demonstrate this as a measurable and repeatable process.

The workshop began with a series of universal measurement principles, including microphone placement, measurement data management and post processing. There was also much on loudspeaker design, boundary effects, and room behaviors such as room modes (and a lot more). Only then did we get into the actual OptEQ process, with demonstrations of equalization and lots of information on filter types and crossovers.

It would take the entire issue of this magazine to detail the presentation, so instead, let’s “unpack” some of the key aspects. Also note that the techniques were demonstrated as platform-independent workflows as opposed to specific measurement platforms, so users can apply the OptEQ process with Rational Acoustics Smaart, Meyer Sound SIM, AFMG SysTune, Gold Line TEF, and so on.


Photo of seminar attendees

The workshop was presented in both classroom (shown here) and lab (shown below) formats.

What really matters in system optimization? As presented at the workshop, the first step is to balance the response of the direct field through proper equalization, independent of environmental and venue factors, and this process can be done whether the loudspeaker is onsite or not. Further processing to manage “in situ” factors (such as boundaries, multi-loudspeaker coupling, room modes, etc.) is applied as additional layers. What really matters is linear transfer – that is, a loudspeaker system that reproduces its input signal accurately. As a professional mixer of many years, I certainly appreciate working with a system where my mixing tweaks translate transparently to the audience.

This was then furthered by a focus on time. Time is that integral and often confusing component of sound. Fast Fourier transform (FFT) and other modern measurement systems allow us to separate time and frequency, and by selecting a proper time window for an FFT measurement, we’re able to see only the direct field response – because it arrives first. This is sometimes tricky. Pat noted, “As we lengthen the time window, we transition from science to art.” That is, we transition from observing the direct field response (pure science) to seeing the whole sound in situ. This reinforces the OptEQ idea that the direct field response, balanced first, is science. What we might do after that is a combination of science, experience, and preferences.

FFT measurement systems were used for demonstration (mostly SysTune and FIRCapture) with several stimuli such as sweeps and pink noise. But it was reinforced that the OptEQ principles, again, may be applied with any chosen modern measurement platform. We did look at RTA data, mostly to point out its limited abilities, but FFT transfer functions were the usual measurement type. A transfer function passes a test signal (pink noise, sweeps, sometimes even music) through the system, then compares the result with the original test signal and displays the difference in various data views. Much time was spent investigating IR (impulse response) and ETC (envelope time curve) data. Learning to view and interpret both forms of data is crucial in modern sound measurement work, as they reveal a wealth of detail about time and frequency.
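The "compare output against input" idea behind a transfer function can be sketched in a few lines. This is an illustrative NumPy example only – not any platform's actual implementation – and a single FFT block stands in for the averaging, windowing, and coherence weighting that real analyzers perform:

```python
import numpy as np

def transfer_function(reference, measured, fs, n_fft=4096):
    """Estimate a system's response by dividing the spectrum of the
    measured output by the spectrum of the reference test signal."""
    X = np.fft.rfft(reference[:n_fft])      # spectrum of what went in
    Y = np.fft.rfft(measured[:n_fft])       # spectrum of what came out
    H = Y / X                               # complex transfer function
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    mag_db = 20 * np.log10(np.abs(H))       # magnitude in dB
    phase_deg = np.degrees(np.angle(H))     # phase in degrees
    return freqs, mag_db, phase_deg

# Sanity check: a "system" that simply halves the signal should read
# a flat -6 dB. White noise stands in for the pink-noise stimulus.
fs = 48000
rng = np.random.default_rng(0)
stimulus = rng.standard_normal(4096)
output = 0.5 * stimulus                     # attenuate by 6.02 dB
freqs, mag_db, phase_deg = transfer_function(stimulus, output, fs)
```

Because the test signal itself is the reference, the displayed difference is the system's response regardless of which stimulus is used – which is why transfer-function measurement can even work with music as the source.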

The biggest benefit to me was receiving, from experts, “tips & tricks” on measurement workflow – and seeing the work performed on the fly. I’ve modified my own measurement setup and tuning workflow a bit as a result of the demonstrations.


The lab equipped with systems and test gear.


What’s valid and useful? It depends on each measurement, and the specific goal. When seeking the direct sound with popular FFT type systems, using a proper length FFT size/time window is key. If that’s incorrect, it’s really a waste of time.

This translates to knowing what to equalize. By understanding what data is useful, and as importantly what data is not, we can understand what to work on “fixing” with equalization (and what to ignore). And what matters really does vary by application. For instance, attempting to equalize a response aberration caused by comb filtering (time), such as a late boundary reflection, is futile – it simply can’t be done.
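Why an EQ filter can't fix a comb-filter aberration is easy to show numerically. The sketch below (illustrative only, not from the workshop material) models the direct sound summed with one delayed reflection; the dips are cancellations, so adding EQ boost there feeds both arrivals equally and the cancellation remains:

```python
import numpy as np

def comb_response_db(freq_hz, delay_ms, reflection_gain=1.0):
    """Level of a direct arrival summed with one copy delayed by tau:
    20*log10(|1 + g * exp(-j*2*pi*f*tau)|)."""
    tau = delay_ms / 1000.0
    h = 1.0 + reflection_gain * np.exp(-2j * np.pi * freq_hz * tau)
    return 20 * np.log10(np.abs(h))

# A reflection arriving 1 ms late: peaks at 1 kHz, 2 kHz, ... and deep
# nulls at 500 Hz, 1.5 kHz, 2.5 kHz, ... -- the classic comb pattern.
print(comb_response_db(1000.0, 1.0))      # in-phase sum, about +6 dB
print(comb_response_db(500.0, 1.0, 0.8))  # near-cancellation, about -14 dB
```

The spacing of the peaks and nulls depends only on the delay time, which is why the fix lives in the time domain (geometry, absorption, alignment), not in the equalizer.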


Historically, real-time analyzers (RTAs) have been used for seeing the frequency spectrum of sound. These are “time blind” – still useful for certain applications, but modern measurement systems now take time into account.

The workshop helped clarify time windowing – that is, using an appropriate time window for the FFT measurement. Done correctly, a time-windowed measurement allows inspection of the direct sound arrival, excluding the (later arriving) reflections and other venue acoustical properties, while maintaining appropriate data resolution. We reviewed the inherent trade-off between window length and resolution, which is frequency dependent, and there was much discussion about manually choosing the best window length for various measurement applications.
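The trade-off itself follows from one relationship: the FFT's bin spacing is the reciprocal of the window length. A tiny sketch (my own illustration, not workshop code) makes the tension concrete:

```python
def fft_resolution_hz(window_ms):
    """Frequency resolution (bin spacing) of an FFT over a time window
    of the given length: delta-f = 1 / T. Shortening the window rejects
    late-arriving energy but blurs low-frequency detail."""
    return 1000.0 / window_ms

# 20 ms excludes reflections arriving more than 20 ms late, but its
# 50 Hz bins are far too coarse to inspect the response below ~100 Hz.
# 500 ms resolves 2 Hz detail, but admits half a second of the room.
for win_ms in (20, 100, 500):
    print(f"{win_ms:>4} ms window -> {fft_resolution_hz(win_ms):.1f} Hz bins")
```

This is also why the "best" window length is frequency dependent: the direct-field detail we need to see at 50 Hz simply cannot fit inside a window short enough to isolate the direct arrival at 5 kHz.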

We also looked at MTW (multi-time windows) and their anatomy, and noted that each measurement platform on the market handles these a little bit differently. Instead of using one window length broadband, these essentially use a different time window length for each range of frequencies. The clincher is that the user must understand the trade-offs here, and select the parameters that produce the most meaningful results for whatever he or she intends to measure at the moment.
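The anatomy of an MTW scheme amounts to a lookup from frequency range to window length. The table below is entirely hypothetical – band edges and lengths are illustrative, and every platform chooses its own – but it shows the shape of the trade-off:

```python
# Hypothetical multi-time-window table: (low Hz, high Hz, window ms).
# These values are illustrative only, not any platform's defaults.
MTW_BANDS = [
    (20, 200, 500.0),     # lows: long window for resolution, admits the room
    (200, 2000, 50.0),    # mids: a compromise between the two
    (2000, 20000, 5.0),   # highs: short window isolates the direct arrival
]

def window_ms_for(freq_hz):
    """Return the analysis window length used at a given frequency."""
    for lo, hi, win_ms in MTW_BANDS:
        if lo <= freq_hz < hi:
            return win_ms
    raise ValueError(f"{freq_hz} Hz is outside the measured range")
```

Seen this way, the user's job is choosing sensible band edges and lengths for the measurement at hand, which is exactly the judgment the presenters stressed.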

Just two of the numerous illustrations in the course manual that clarify concepts and serve as a handy reference tool later.



Room modes were investigated and discussed as well. While this is really an acoustics topic, it’s related to OptEQ in that there are various approaches as to how to handle room modes in sound system equalization. Some schools of thought say to do nothing, because they’re an acoustic issue that is not “fixable” with EQ – and they aren’t directly caused by loudspeaker systems.

But another approach, which was demonstrated, is to notch the sound system precisely to avoid exciting such room modes. In this case, one has to measure precise data to know where in frequency to place the notch filters. In particular, the Dr. (Paul) Boner method of room mode notching, which goes back nearly half a century, was presented and reviewed by John Murray – rather amazing considering the measurement tools that existed at the time.
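Where those modal frequencies fall can be predicted from room geometry with the standard axial-mode formula, f_n = n·c/(2L). A quick sketch (my own illustration; the predicted values only suggest where to look – the actual notch frequencies must still be confirmed by measurement, as the presenters stressed):

```python
def axial_mode_frequencies(length_m, count=5, c=343.0):
    """First few axial room-mode frequencies for one room dimension:
    f_n = n * c / (2 * L), with c the speed of sound in m/s."""
    return [n * c / (2 * length_m) for n in range(1, count + 1)]

# A 10 m room dimension puts axial modes near 17, 34, 51, 69 and 86 Hz;
# each of the three dimensions contributes its own series.
print([f"{f:.1f}" for f in axial_mode_frequencies(10.0)])
```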

The phrase “tune the room” has been used for decades, but it’s really not possible without a bulldozer, as it requires a physical change in the room’s volume, shape, and interior surfaces. All we can tune with processing is the sound system itself, but it has been shown that we can do so in a way that doesn’t “trigger” undesirable room issues.


It was also interesting to hear the presenters explain their own workflows in the shop and in the field, particularly Deward Timothy’s practice of pre-equalizing loudspeakers in the shop for correct direct field frequency response, then processing further as appropriate once installed in the venue (alignment, coupling, reflections, room modes, and other situational influences).

There were also a lot of opportunities to talk with peers.


This relates to the OptEQ idea that the main equalization process is strictly science and independent of room acoustics, and that, mostly excluding time alignment, further processing as it relates to the venue environment and application becomes more of a blend of science, art, experience, and preference. The former is teachable and doesn’t require a lifetime of experience to conquer, while the latter most certainly does. (If we could only measure and optimize our loudspeaker systems outdoors, it would often be far less complex! Alas, room acoustics are a big piece of the behavior of sound.)

One thing many of us admire about Pat is his ability to create clever analogies to convey complex topics. For example, his analogy at the workshop on gradients: “It may be warmer in Florida than in Indiana, but the temperature doesn’t change at the state line.”

I’ve attended numerous SynAudCon seminars and workshops over the years, and all are myth-free and employ expert, highly experienced presenters. This is furthered by the opportunity to network with industry colleagues, meet new ones, and share audio “war stories.” It’s truly benefited my work, and OptEQ was no exception.

I’m hopeful that Pat and Brenda will present another OptEQ workshop in the near future. And if so, all I can say is “don’t miss it” if you wish to further your understanding of the vital topic of sound system measurement and tuning. ■


KENT MARGRAVES began with a B.S. in Music Business and soon migrated to the other end of the spectrum with a serious passion for audio engineering. Over the past 25 years he has spent time as a staff audio director at two mega churches, worked as worship applications specialist at Sennheiser and Avid, and toured as a concert front of house engineer. He currently works with WAVE in North Carolina and can be contacted at kent@wave.us.