I was playing with a karaoke application and came up with the following questions:
1. The application allowed its users to control the volume of the artist's vocals, and even mute them entirely. How is this possible?
I know that every sound can be represented as a combination of frequencies. So does adjusting the artist's volume or setting an equalizer mean performing some transformation on the relevant frequencies?
2. The application recorded the user's voice via a mic. Assuming the sound is recorded in some format, the application was able to mix the recording with the karaoke track (with the artist's voice muted). How can this be done?
Did they play both the track and the voice recording simultaneously? Or did they insert an additional frequency (channel?) into the original track, or perhaps replace one?
What sort of DSP is involved here? Is this possible in Java?
I am curious, so if you have pointers to documents or books that can help me understand the mechanism here, please share.
I think with a karaoke machine the voice is on its own track, so there is no need for DSP.
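To make that concrete, here is a rough sketch of how it could be done with the Java Sound API (javax.sound.sampled), assuming the vocal and instrumental parts ship as separate PCM/WAV files (the file names below are just placeholders): load each one as a Clip, start them together, and pull the vocal track's gain down to mute it. Whether a MASTER_GAIN control is available depends on the mixer.

import javax.sound.sampled.*;
import java.io.File;

public class KaraokeMixDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder files: any PCM WAV data the default mixer can play.
        Clip instrumental = loadClip(new File("instrumental.wav"));
        Clip vocals = loadClip(new File("vocals.wav"));

        // Mute the artist's vocal track via its gain control.
        // Note: not every mixer exposes MASTER_GAIN; check isControlSupported first in real code.
        FloatControl vocalGain =
                (FloatControl) vocals.getControl(FloatControl.Type.MASTER_GAIN);
        vocalGain.setValue(vocalGain.getMinimum()); // effectively mute

        // Start both tracks together so they stay in sync.
        instrumental.start();
        vocals.start();

        // Keep the JVM alive until the instrumental finishes playing.
        Thread.sleep(instrumental.getMicrosecondLength() / 1000);
    }

    private static Clip loadClip(File file) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(file);
        Clip clip = AudioSystem.getClip();
        clip.open(in);
        return clip;
    }
}

Turning the vocals back up is just a matter of raising that same gain control, which is all the "volume of the artist" slider needs to do when the vocals are on their own track.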
If you're interested in Java and sound, check out http://jsresources.org/. It has example code and explains how the Java Sound API works.
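For the recording part of your question, capturing from the microphone is done with a TargetDataLine from the same package. A minimal sketch, assuming 16-bit mono PCM at 44.1 kHz (the format and buffer size are just illustrative choices):

import javax.sound.sampled.*;
import java.io.ByteArrayOutputStream;

public class MicCapture {
    // Records roughly the requested number of seconds of raw PCM from the default mic.
    public static byte[] record(int seconds) throws Exception {
        // 44.1 kHz, 16-bit, mono, signed, little-endian PCM.
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        TargetDataLine mic = AudioSystem.getTargetDataLine(format);
        mic.open(format);
        mic.start();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        long bytesWanted = (long) (format.getFrameRate() * format.getFrameSize() * seconds);
        long bytesRead = 0;
        while (bytesRead < bytesWanted) {
            int n = mic.read(buffer, 0, buffer.length);
            out.write(buffer, 0, n);
            bytesRead += n;
        }
        mic.stop();
        mic.close();
        return out.toByteArray();
    }
}

The jsresources.org examples cover the same ground in more detail, including writing the captured data out as a WAV file.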
It seems to me that there were two separate MP3s involved: an instrumental track and a voice track. Both of these were played concurrently, perhaps using the AudioSession APIs on iPhone (SoundPool on Android?). The user's audio was recorded from the microphone, and when the user chooses preview, the app plays the recorded audio in sync with the instrumental track.
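If the app does want to bounce the two down to a single file rather than just play them in sync, mixing raw PCM is about as simple as DSP gets: add the samples and clip. Here is a rough sketch for two 16-bit little-endian PCM buffers in the same format (my own illustration, not necessarily what that particular app does):

// Mixes two 16-bit little-endian PCM buffers by summing samples and clipping.
public class PcmMixer {
    public static byte[] mix(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length) & ~1; // whole 16-bit samples only
        byte[] out = new byte[len];
        for (int i = 0; i < len; i += 2) {
            int sampleA = (short) ((a[i] & 0xFF) | (a[i + 1] << 8));
            int sampleB = (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
            int sum = sampleA + sampleB;
            // Clip to the 16-bit range to avoid wrap-around distortion.
            sum = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
            out[i] = (byte) (sum & 0xFF);
            out[i + 1] = (byte) ((sum >> 8) & 0xFF);
        }
        return out;
    }
}

Anything fancier (equalizers, pitch correction, echo) would mean real DSP on top of this, but simple playback-in-sync or sample addition is enough for the basic karaoke use case.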