Jun 26, 2011 at 10:26 PM

Post anything else here

Jun 29, 2011 at 3:18 PM
Edited Jun 29, 2011 at 3:19 PM

This is an excellent library. Thank you. I've updated the InstrumentBank to return the "name" of the instrument being loaded. That way, the GAME1.cs program can display all the instruments and let the user pick one.

Thank you again and keep working at this.

Jun 29, 2011 at 9:04 PM
Edited Jun 29, 2011 at 9:05 PM

Thank you for posting =)

Keeping track of string names for the instruments is an excellent idea.

At the time when I wrote the bank manager I decided to do the minimum and let the instruments control their samples.

I think in the future I may have the BankManager keep track of the samples themselves to prevent the loading of duplicate samples which is already a problem with many of the instruments.

For example you may notice that many of the drums use the same samples...
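A shared-sample cache like the one described could be as small as a dictionary keyed by sample name. This is only an illustrative sketch, not the library's actual API; `SampleCache` and `GetSample` are hypothetical names:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of a shared-sample cache for the BankManager idea.
// Duplicate requests for the same sample return the already-loaded copy.
public class SampleCache
{
    private readonly Dictionary<string, float[]> samples =
        new Dictionary<string, float[]>();

    // Returns the cached sample if it was loaded before; otherwise
    // invokes the loader once and stores the result for reuse.
    public float[] GetSample(string name, Func<string, float[]> loader)
    {
        float[] data;
        if (!samples.TryGetValue(name, out data))
        {
            data = loader(name);
            samples.Add(name, data);
        }
        return data;
    }
}
```

With something like this in place, two drums that reference the same sample name would share one buffer instead of loading it twice.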

Jun 29, 2011 at 9:29 PM
Edited Jun 29, 2011 at 9:30 PM

Thank you for your reply. Please make NoteOn(chn, note, vel) and NoteOff(note, chn) use the same parameter order, meaning NoteOff(chn, note). It is easier to remember.

Also, in LoadMidiFile, change the parameter to a file name only and manage the stream inside the loading routine, instead of taking a Stream as the parameter. That is easier on the calling routine.
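For illustration, a wrapper along those lines might look like the sketch below. It is self-contained with hypothetical names (the library's actual LoadMidiFile return type and internals differ); the point is just that the filename overload owns the stream and delegates to the Stream-based one:

```csharp
using System.IO;

// Hypothetical sketch of the suggested overload pair. Names are
// illustrative, not the library's actual API.
public static class MidiLoader
{
    // Stand-in for the existing Stream-based loader (stubbed here to
    // simply read all bytes so the example is runnable).
    public static byte[] LoadMidiFile(Stream stream)
    {
        using (var buffer = new MemoryStream())
        {
            stream.CopyTo(buffer);
            return buffer.ToArray();
        }
    }

    // Convenience overload: the caller passes only a path, and this
    // method opens and disposes the FileStream itself.
    public static byte[] LoadMidiFile(string fileName)
    {
        using (FileStream stream = File.OpenRead(fileName))
        {
            return LoadMidiFile(stream);
        }
    }
}
```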

Keep updating please.  I am working on a "piano" class for the demo.  I'll keep you posted.


Jun 30, 2011 at 1:34 AM
Edited Jun 30, 2011 at 2:12 AM

"Please make NoteOn(chn, note, vel) and NoteOff(note, chn) have their parameters the same order..."

-I have actually done this already, but haven't updated the release. Today or tomorrow I will have it up.

Some new things that I have done are:

1. Added an attack & release modifier

(I found that the sound-bank attack & release values are a bit off, especially during fast notes like drum hits)

2. Added public property Name to abstract Instrument so instruments can be given names.

(for example, to get an instrument's name: InstrumentBank.getInstrument(20, true).Name)

3. Added documentation to the Synthesizer class

As for the midi class, I wrote that before I was aware of the Compact Framework's file-loading methods, so I figured a stream would be a good place to start since I knew Streams were supported.

I thank you for your interest and if you would like to become a dev for this project let me know.

Jun 30, 2011 at 1:35 AM

There are quite a few things that are above me that I'm hoping some people can help me with in order to make this project better, so if anyone's interested...

Re-sampling - Originally my code could only re-sample by integer factors (say 44100 to 22050), but I would like to get other conversions working as well.
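One common way to handle non-integer rate ratios is linear interpolation between neighboring input samples. The sketch below is not the project's code, just a minimal illustration of the technique:

```csharp
using System;

public static class Resampler
{
    // Resamples input from srcRate to dstRate using linear interpolation.
    // Works for non-integer ratios as well, e.g. 44100 -> 32000.
    public static float[] Resample(float[] input, int srcRate, int dstRate)
    {
        int outLength = (int)((long)input.Length * dstRate / srcRate);
        float[] output = new float[outLength];
        double step = (double)srcRate / dstRate; // input samples per output sample

        for (int i = 0; i < outLength; i++)
        {
            double pos = i * step;        // fractional position in the input
            int idx = (int)pos;
            double frac = pos - idx;
            float a = input[idx];
            float b = idx + 1 < input.Length ? input[idx + 1] : a;
            output[i] = (float)(a + (b - a) * frac); // blend the two neighbors
        }
        return output;
    }
}
```

Linear interpolation introduces some high-frequency aliasing on upsampling-heavy material; windowed-sinc filtering sounds better but costs considerably more CPU, which matters on the Compact Framework.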

Pitch & tune - Am I doing this correctly?

Multi-threading- Is keeping the sequencer on another thread correct? Am I locking the correct portions of code?

Jul 12, 2011 at 2:44 PM

I am trying to figure out what you are doing in CompactMidiFramework.cs... Do you have any documents regarding this? Do you use "CombineTracks" to sort all events by time? I am trying to figure out the sequencer. Maybe you have an updated version now?

Keep up the good work.


Jul 13, 2011 at 3:48 AM

The CompactMidiFramework class was one of the first classes I wrote for this project; in fact, I used it in a fluidsynth port I did for C#.

I haven't got any documentation, but I would be glad to produce some in time.

There are many ways to write a sequencer, but here is the quick version of how mine works:

1. First I load my data and process it going through the midi standard for events and such. (not all events in the midi standard are implemented)

2. When the midi events are finished loading they are contained in 1 or more tracks depending on the midi type. (these tracks could be based on midi channel, instrument, or even different songs.)

3. Since I knew timing would be an issue, especially in the compact framework (actually it still is), I opted for a single-threaded sequencer.

4. So instead of going through each track on a different thread, I combine them, which is a perfectly legitimate sequencing technique.

5. However the "sort" done in CombineTracks is where my sequencer perhaps doesn't follow normal practice.

6. What happens in CombineTracks is that the DeltaTime gets converted from an Absolute time into a Difference of time and then sorted.

-Most sequencers will use this absolute time along with a precise timer that constantly counts ticks to decide when events should be processed.

-My sequencer does the opposite: it takes the event first and then uses the timer to wait out the delta time, giving other threads a chance to run.

- I felt that this approach would yield more accurate results when it came to timing in the long run and also be less CPU intensive.

Here is a quick example of how the midi data is processed:

Original Midi Event Data

Track 1                 |  Track 2
NoteOn  - DeltaTime 0   |  ProgramChange - DeltaTime 0
NoteOff - DeltaTime 5   |  PitchBend     - DeltaTime 2
NoteOn  - DeltaTime 6   |  TextEvent     - DeltaTime 6
NoteOn  - DeltaTime 10  |  NoteOffAll    - DeltaTime 11

Midi Data After Processing

Track 1
NoteOn        - DeltaTime 0
ProgramChange - DeltaTime 0
PitchBend     - DeltaTime 2
NoteOff       - DeltaTime 3
NoteOn        - DeltaTime 1
TextEvent     - DeltaTime 0
NoteOn        - DeltaTime 4
NoteOffAll    - DeltaTime 1

The reason I call CombineTracks in the sequencer instead of in the midi constructor is in case of situations in which you want to preserve the format of the midi for editing or just printing out event information.

I hope this cleared things up and I am sorry if I have confused you.

Continue to ask questions though if you have them.

Jul 13, 2011 at 1:15 PM

Thank you for your reply and explanation.  I like your framework, because it is very simple.  I've written MIDI apps a few times using VB.net and C++.  I've since come to C# and XNA for the past 2 years.  I like XNA because it allows for better graphics handling. 

I am trying out 2 MIDI packages: (1) Sanford's MIDI toolkit (this has excellent timing but is VERY complicated and hard to figure out); (2) Tom Lokovic's MIDI.net, which has a very good scheduler for playback (I have not yet played with it). I am going to use your framework with MIDI.net and its scheduler.

I eventually want to write my own sequencer. Why? Because it is a problem that I don't understand and want to solve :-)

Keep up the good work



Jul 13, 2011 at 7:30 PM
Edited Jul 13, 2011 at 7:31 PM

Yes, I wrote sequencers before I started trying out synths, and I found Sanford's projects very useful as well.

The way the sequencer handles the midi data is generally not as important as its timing mechanism. In fact, I have a sample project that uses my same sequencer but with access to the multimedia timing API, and it works beautifully.

Perhaps this project will help you out, even if you don't like my sequencer you can still use the fluidsynth wrapper as a base for your sequencer.

Beware: you may need to download the VC++ 2008 redistributable.



Here's the link: download

Jul 14, 2011 at 1:58 PM

Thanks for the download. The sound is so different (I like it). I was not aware of fluidsynth. I have also used Toub MIDI.

I basically use MIDI to learn music (Piano keys).  In the process, I came across sequencers.  So, I am trying to reverse-engineer other programs to learn the timing of MIDI events.  Your XNA Midi opened up another chance to learn something new.  Are you using fluid synth in XNA Midi as well?

Jul 14, 2011 at 4:21 PM

I thought you would like it. =)

However, it's an entirely different thing compared to my synth; fluidsynth is a wavetable synthesizer programmed in C that follows the SF2 standard.

These SF2 files are wavebanks with all the information about how to play the wave samples back, and a lot more. If you're like me and love to listen to MIDIs in your spare time, I recommend a good SF2 player like SynthFont. The sound is very good, especially if you can find the right SF2 file (I have some that are over 200 megs).


To answer your last question, no, in order to maintain the ability to run on the Xbox 360 I cannot use fluidsynth, but I was thinking about one day implementing the SF2 standard.

Jan 20, 2012 at 1:04 AM

How are the channels set up? For example channel 0 to use instrument 6?

Jan 22, 2012 at 4:57 AM
Edited Jan 22, 2012 at 5:04 AM

Sorry for my lack of documentation. It seems when I look back I always end up editing/adding code instead of documenting.

As for your question about the channels, you should look at the Sequencer class. The synthesizer only cares about the channel for tracking purposes.

In the Sequencer there is an integer array "currentPrograms[]" which I use to keep track of the current instrument each channel is using.

So to manually change the instrument on channel 0 to use instrument 6 (assuming instrument 6 exists), you would do this:


currentPrograms[0] = 6;


By chance are you trying to change the instruments during midi playback?

If so, remember that if the midi has a program change event on that channel, your instrument will be overwritten. (Most midis will do this at least once; in fact, that is how instruments are first set when a midi is played.)

If you really want to change the instrument on the channel you will have to go through the midi events when they are loaded and edit the program change event to the instrument you want.
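As a rough sketch of what that edit could look like (the event type and field names here are hypothetical, not the library's actual ones):

```csharp
using System.Collections.Generic;

// Hypothetical simplified channel event; field names are illustrative.
public class ChannelEvent
{
    public string Type;    // e.g. "ProgramChange", "NoteOn"
    public int Channel;
    public int Value;      // program number for ProgramChange events
}

public static class MidiPatcher
{
    // Rewrites every ProgramChange on 'channel' to use 'program', so
    // the midi's own events can no longer override the chosen instrument.
    public static void ForceProgram(List<ChannelEvent> events,
                                    int channel, int program)
    {
        foreach (var e in events)
        {
            if (e.Type == "ProgramChange" && e.Channel == channel)
            {
                e.Value = program;
            }
        }
    }
}
```

You would run something like this once, right after the midi events are loaded and before playback starts.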

I might make this process simpler in the future though in another release.

Thank you for your question.