
Justin Kennedy
Audiovisual Specialist
I am an Audiovisual Specialist with experience in commercial and educational sectors. I was the A/V lead for the School of Music at Western Carolina University. Prior to that, I designed, programmed, and installed systems as part of Baltu Studios. In addition to college degrees, I hold Crestron and Dante certifications, as well as a C++ coding certificate from Skillshare. I have completed AVIXA CTS coursework and will take my CTS exam in April 2026.
I believe the ultimate aim of audiovisual technology is the communication of ideas. The technology itself should never be the focus. It should be easy for users to share and understand ideas seamlessly. When I do my job well, my work is meaningful but invisible. The audience, presenters, or performers are instead engaged with the content, subconsciously confident in the technology.
Portfolio
Interactive A/V Museum Exhibit
In Harmony was an interactive audiovisual installation housed inside a geodesic dome. Sitting in the middle of the dome was a table with a clear surface and a webcam at its base. Museum visitors moved "game pieces" across the table to activate sound for a second-order ambisonic system played back over nine speakers. They could alter the audio spatialization, filtering, and volume, as well as trigger hidden sounds. Visitors were also able to change the lighting behavior of hundreds of LEDs.
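As a simplified illustration of this kind of mapping (not the installation's actual, proprietary code), the toy sketch below converts a tracked game piece's normalized table position into a pan angle, a filter cutoff, and a gain value. The tracking, parameter ranges, and function names are illustrative assumptions only.

# Toy sketch: derive audio parameters from a tracked game piece's position.
# Positions are assumed to be normalized to the table surface (0.0-1.0 on
# each axis); the parameter ranges and mapping are illustrative only.
import math

def piece_to_audio_params(x: float, y: float) -> dict:
    """Map a normalized (x, y) table position to pan, filter cutoff, and gain."""
    # Pan angle: left edge -> -90 degrees, right edge -> +90 degrees.
    pan_deg = (x - 0.5) * 180.0

    # Filter cutoff: front of table -> 200 Hz, back of table -> 8 kHz
    # (exponential interpolation so the sweep sounds perceptually even).
    cutoff_hz = 200.0 * (8000.0 / 200.0) ** y

    # Gain: quieter toward the table's center, louder toward its edges.
    distance_from_center = math.hypot(x - 0.5, y - 0.5) / math.hypot(0.5, 0.5)
    gain = 0.3 + 0.7 * distance_from_center

    return {"pan_deg": pan_deg, "cutoff_hz": cutoff_hz, "gain": gain}

if __name__ == "__main__":
    print(piece_to_audio_params(0.25, 0.75))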
For this installation, I was:
- Pitch Deck Writer
- Project Manager
- Programmer (one of three)
- Co-Designer
- Composer
- Sound Designer
- Installation Technician (one of five)
- Technical Writer
I wrote and presented the pitch for this project to the IDEA Museum of Mesa, Arizona, and we were grateful to earn a contract with them. After the pitch, I assumed project management duties, which included creating the budget, keeping the project on schedule, and ensuring development stayed within budget constraints. I used Agile Scrum and Kanban methods to manage the project and coordinate the design, software, hardware, and operational teams.
I was one of three people responsible for CRM (Customer Relationship Management). I worked with the museum curator and his staff, while my teammates worked with the museum director and her administrative team.
Click the image above to view the photo gallery.
I co-designed the installation alongside Gino Ceresia, Rex Witte, and Peter Costa. We worked with HVAC specialists to ensure the installation was well ventilated, and we calculated the exhibit's power requirements and relayed that information to the electricians working with the museum.
Accessibility was an important consideration for the In Harmony project. The table always had two spots reserved for wheelchair access and was low enough to operate from a seated position. The installation space had an unobstructed perimeter, so visitors with vision impairments could navigate it. Removable step stools were placed at two ends of the table for small children. Since this was an installation intended for families, it also had to be child-proofed, a duty Rex Witte took charge of.
Once the designs were finalized, I set about creating the exhibit's music. The composition was pandiatonic and its duration was controlled by the users. I also did the sound design.
Together with my colleagues Gino Ceresia and Paul Diaz, I coded the installation's stand-alone A/V software application. The app is owned by Baltu Technologies, my employer at the time, and that deliverable was licensed to the IDEA Museum.
After completing the installation and software, my team and I devised a simple four-step boot-up and power-down procedure, made possible by automating system tasks. I wrote the technical documentation for it, which you may view on this site. Once the documentation was complete, I trained museum employees to power the exhibit on and off.
In Harmony was open to the public from June 2018 through November 2019. Though we had a debugging and repair clause in our contract, and though our team would check in on the installation during site visits, neither was needed: there was not a single technical problem during the exhibit's entire run.
Technical writing document with illustrations for boot-up and power down procedure.
Classroom A/V Solutions
The video above walks through my programming of a Q-SYS classroom A/V system that I designed. Users control audio and visual aspects of the system by interacting with a touchscreen.
The touchscreen offers mixer control of a stereo system. The available audio sources are a Shure MX412 podium mic, a room PC, and a BYOD (bring your own device) signal received through an HDMI input.
The video source options include a ceiling-mounted NC 12x80 camera, the room PC display, and the BYOD display. Users may control the PTZ (pan-tilt-zoom) of the NC 12x80. In addition, users may use the touchscreen to lower or raise the retractable projection screen.
You may see the system design and routing by viewing the Q-SYS Designer screenshot on this page.
One feature highlighted in this system, as noted in my Q-SYS video, is the synchronization of the touchscreen's classroom-computer fader with the PC's own volume display. This is a bit tricky because the two volume scales differ. Q-SYS, which drives the touchscreen, uses dBFS, so its scale runs from -100 to 0. The Windows PC displays volume on a scale of 0 to 100, but the psutil library I used to control the PC's volume display works on a scale of 0.0 to 1.0, with 0.0 outputting no sound, 1.0 being maximum loudness, and the decimal values in between covering all other levels. Take a look at the code snippet on this page to see how I accounted for this. The snippet specifically shows how to send volume information from the Windows PC (via psutil) into Q-SYS.
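The snippet itself lives on this page, so what follows is only a minimal sketch of the underlying scale conversion: it maps a 0.0-1.0 volume scalar onto the Q-SYS fader's -100 to 0 dBFS range and pushes the result to the core over TCP. The volume-reading helper, host, port, and command format are placeholder assumptions, not the actual implementation.

# Minimal sketch (illustrative only, not the production snippet): convert the
# PC's 0.0-1.0 volume scalar to the Q-SYS fader's dBFS range (-100 to 0) and
# send it to the core over TCP. get_pc_volume(), the host, port, and command
# format are placeholders.
import math
import socket

QSYS_MIN_DB = -100.0  # Q-SYS fader floor in dBFS
QSYS_MAX_DB = 0.0     # full scale

def scalar_to_dbfs(level: float) -> float:
    """Map a 0.0-1.0 volume scalar onto the fader's dBFS range."""
    if level <= 0.0:
        return QSYS_MIN_DB
    db = 20.0 * math.log10(level)  # 1.0 -> 0 dBFS, 0.1 -> -20 dBFS, etc.
    return max(QSYS_MIN_DB, min(QSYS_MAX_DB, db))

def get_pc_volume() -> float:
    """Placeholder for reading the PC's current 0.0-1.0 volume scalar."""
    return 0.5

def send_to_qsys(db_value: float, host: str = "192.168.0.10", port: int = 1710) -> None:
    """Push the converted level to the Q-SYS core; the command format is illustrative."""
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(f"SET PC_FADER {db_value:.1f}\n".encode("ascii"))

if __name__ == "__main__":
    send_to_qsys(scalar_to_dbfs(get_pc_volume()))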
I first thought to add this volume sync feature, along with automated microphone muting, during the COVID-19 pandemic. I worked at a university during that time. The pandemic made lecturing with A/V system tools especially difficult because classrooms had to be socially distanced and filled only to half capacity. So, during a class, only half the students would be physically present, with the other half attending on Zoom.
To make matters worse, when teachers played audio or video examples, the sound would go to either the classroom speakers or Zoom, not both. To address this, I implemented a 4-channel output solution in Western Carolina University's classrooms, most of which used Extron technology and Mac computers. This allowed teachers to send streaming audio out of the speakers while simultaneously sending that same audio over Zoom. It solved a big problem, but it also created a new one.
When instructors streamed audio to both the classroom loudspeakers and Zoom attendees, speaker bleed caused an echo effect. Manually muting the microphone while switching applications was inefficient, and traditional gating proved unreliable because lecturers' speaking volumes varied widely. Instead of a gate, I coded a solution that monitored operating system media activity, as demonstrated in my Q-SYS portfolio video. I used the Selenium library to detect media state changes and transmitted control data to each system's control environment over standard TCP/IP networking. The code "watched" the play controls on Mac and Windows systems: "play" muted the mic, while "pause" or "stop" unmuted it.
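As a rough illustration of that approach (not the production code), the sketch below polls a browser-based player's play/pause control with Selenium and pushes a mute or unmute command to the room's control system over TCP. The selector, command strings, host, and port are hypothetical placeholders.

# Minimal sketch: watch a web player's play/pause state with Selenium and
# send mute/unmute commands to the room's control processor over TCP.
# The selector, command strings, host, and port are illustrative placeholders.
import socket
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

CONTROL_HOST = "192.168.0.50"   # hypothetical control processor address
CONTROL_PORT = 1702             # hypothetical control port

def send_command(command: str) -> None:
    """Send a single control string to the A/V control environment."""
    with socket.create_connection((CONTROL_HOST, CONTROL_PORT), timeout=2) as sock:
        sock.sendall((command + "\n").encode("ascii"))

def is_media_playing(driver) -> bool:
    """Infer playback state from the player's toggle button label."""
    button = driver.find_element(By.CSS_SELECTOR, "button.play-pause")  # placeholder selector
    # Many players label the toggle with the *next* action, so "pause" means playing.
    return "pause" in (button.get_attribute("aria-label") or "").lower()

def main() -> None:
    driver = webdriver.Chrome()
    driver.get("https://example.com/media-player")  # placeholder page
    mic_muted = False
    while True:
        playing = is_media_playing(driver)
        if playing and not mic_muted:
            send_command("MIC_MUTE ON")     # placeholder command string
            mic_muted = True
        elif not playing and mic_muted:
            send_command("MIC_MUTE OFF")
            mic_muted = False
        time.sleep(0.5)

if __name__ == "__main__":
    main()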
Live Event Support & Livestreaming

I oversaw all live event support and livestreaming in the School of Music at Western Carolina University. I was responsible for hiring, training, scheduling, and supervising a team of student engineers for live productions and broadcasts. My team and I supported hundreds of events per year, providing reliable A/V support and livestreaming content that generated over 5,000 hours of watch time in a single year. This extended the reach of the university’s programs and served as a means of advertising for the school.
Pictured here is a hall at Western Carolina University. It is one of several spaces I regularly reconfigured for guest presentations, open houses, and live performances. During the pandemic, its capacity made it a primary lecture venue, requiring reconfiguration of the A/V systems to support large-format and hybrid instruction. Its calendar after 2020-21 demanded efficient turnover between events, along with scheduled system diagnostics and preventative maintenance.
The most complex productions I oversaw were the semesterly in-studio concerts co-produced with colleagues Ethan King and Matt Binford in the Center for Applied Technology (CAT). The center is a multi-million-dollar recording and TV facility at WCU, with a large live room, an audio control room, two music production suites, two isolation booths, a broadcast studio, and a second control room for broadcasting.
I was responsible for end-to-end oversight of these in-studio concert productions. My shared duties included planning, scheduling, and troubleshooting, as well as managing input lists, load-in, routing, patching, session setup, click tracks, talk-backs, vision setup, headphone monitoring, and strike.
Students were divided into groups and assigned to different facilities within CAT. We placed three groups in the live room, one in each production suite, and usually at least one vocalist in an ISO booth. Input lists usually ran between 36 and 48 channels. Pictured is a routing diagram I made for a broadcast in 2022.

Images of control rooms from a 2022 concert broadcast.

Below is one of the livestreams. I have timestamped it to start on the second-to-last song because the last two musical works are so different, which highlights one of the production challenges: production techniques vary by genre. The second-to-last piece was an R&B song performed live in two separate spaces (a production suite and an ISO booth), and the last work was an avant-garde multimedia piece, described by a peer as "…listening to a Twilight Zone episode while watching the musicians make the soundtrack." In between the works, you will see me acting as emcee, which was another one of my duties.
VR & Spatial Audio
I have worked on multiple virtual reality projects in edutainment, EdTech, experiential learning, and music. Through this work, I developed skills in 360 video editing and ambisonic audio production, and I gained proficiency in the Unity game engine.
The first video shown on this page is the trailer for an educational VR game entitled Museum in Orbit. It was made for the Phoenix Natural Discovery Center. I storyboarded and directed the trailer, which was featured on Arizona PBS's television show Arizona Horizon. Additionally, I did sound design, composed music, and wrote C# code for the game. I also helped write and present pitches for client meetings.
Our client asked us to make a virtual museum so users could walk around and view artifacts. My team and I suggested that the artifacts feature gamified interaction mechanics. In addition, I proposed putting the museum in a wondrous environment rather than an ordinary one.
The second video on this page is a 360 video of a string quartet performance. For this, I did audio post-production using iZotope RX, Reaper, and spatial audio plug-ins. The video was encoded for YouTube, which supports first-order ambisonics. This allowed VR headset users to look around the environment and hear the sound spatialization change with their head movements. Computer viewers could click the navigational compass in the top-left corner of the YouTube video to "look around."
In 2017, I was a production assistant for a VR film entitled On the Frontlines of the Border by Carolina Márquez and Jayson Chesler, co-founders of Terrainial VR. This documentary focused on immigration and border patrol at the US-Mexico border.
One of the most fulfilling VR projects of my career was entitled Sound of the Wisps. This VR edutainment game was shown at the 2017 VR for Good Summit in Washington DC. I was the game’s creative director, storyboard artist, experience designer, music composer, and sound designer.
In Sound of the Wisps, players walked through a forest until they happened upon four glowing spheres (the "wisps"). Once players got close, the wisps started playing music and dancing. The music was spatialized, so moving toward a wisp made that character's part of the music sound closer. Mechanics were developed so players could interact with the wisps to explore hidden elements of the music.
In addition to working on company projects, I had other responsibilities while serving as Creative Director at Baltu Studios. My primary duty was to understand the vision of the founders and guide the company's creative efforts toward that vision. Start-up life, however, required wearing many hats. My background made me well suited for learning design and technical writing, so I worked alongside CEO Peter Costa to co-author Baltu's onboarding materials, intern training program, and standard operating procedures (SOPs).

Screenshot from the Sound of the Wisps VR game (click to enlarge).














