
Michael Eskin

Members
  • Posts

    1,080
  • Joined

  • Last visited

Contact Methods

  • Website URL
    http://michaeleskin.com

Profile Information

  • Gender
    Male
  • Interests
    Traditional Irish Music
    Anglo Concertina
    Uilleann Pipes
    Astronomy
  • Location
    San Diego, CA


  1. The WARBL and WARBL2 both support changing octaves via pressure-change thresholds and various related hysteresis and timing settings. Alternatively, you can assign functions like octave shifts to any of the three buttons on the back. Full details and documentation are here: https://warbl.xyz/documentation.html
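     For illustration only, here's a minimal Python sketch of how pressure-threshold octave switching with hysteresis might work in general; the class, names, and threshold values are made up and are not the actual WARBL/WARBL2 firmware logic:

     class OctaveSwitcher:
         def __init__(self, up_threshold=80.0, down_threshold=60.0):
             # Separate up/down thresholds form the hysteresis band that
             # prevents rapid toggling when pressure hovers near one value.
             self.up_threshold = up_threshold
             self.down_threshold = down_threshold
             self.octave_up = False

         def update(self, pressure):
             # Return an octave shift in semitones for the current pressure reading.
             if not self.octave_up and pressure > self.up_threshold:
                 self.octave_up = True
             elif self.octave_up and pressure < self.down_threshold:
                 self.octave_up = False
             return 12 if self.octave_up else 0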
  2. I'm just not convinced that it's worth the massive increase in instrument complexity, and probably cost, to implement. The 3-axis accelerometer used in the WARBL2 is a single tiny chip, and you only need one to implement motion-based MIDI expression for an instrument.
  3. I can't even imagine why you'd want polyphonic aftertouch on a concertina-shaped device. The new WARBL2 BLE MIDI wind controller has a 3-axis accelerometer that can be mapped to any kind of MIDI control message flow. I have a couple, having worked with Andrew on the original. That might be something to consider. You might even be able to leverage the WARBL2 firmware, which is open source, to understand how to do the mapping.
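     As a rough Python sketch of the general idea only (the scaling and CC number are arbitrary choices for illustration, not WARBL2 firmware code):

     def accel_to_cc(axis_value, cc_number=1):
         # Map an accelerometer reading in the range -1.0..+1.0 g
         # to a 7-bit MIDI control change value (0..127).
         clamped = max(-1.0, min(1.0, axis_value))
         cc_value = int(round((clamped + 1.0) * 63.5))
         # A real controller would send this as a 3-byte MIDI message:
         # status 0xB0 | channel, controller number, value.
         return [0xB0, cc_number, cc_value]

     print(accel_to_cc(0.25))  # [176, 1, 79]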
  4. You can now specify your chord inversions using either a number or a letter. Here's the information from the User Guide:
     If you want to specify that a chord in the ABC plays with an inversion, you can append a : and then the numbers 0-14 or the letters a-n:
     0 = no inversion
     1 = 1st inversion
     2 = 2nd inversion
     3 = 3rd inversion (octave up for simple 3-note chords)
     etc.
     Or:
     a = no inversion
     b = 1st inversion
     c = 2nd inversion
     d = 3rd inversion (octave up for simple 3-note chords)
     etc.
     Example 1: E Minor chord, first inversion: "Em:1" or "Em:b"
     Example 2: G chord, second inversion: "G:2" or "G:c"
     Example 3: D Major chord, one octave higher: "D:4" or "D:d"
     The inversion values allow up to a full octave transform of the original chord, and then wrap around to the original chord and subsequent inversions. The inversions also include any octave shifts you might have specified on the %%MIDI chordprog octave=1 style commands for the chord instrument.
     This style of indicating inversions was inspired by a system described at this Wikipedia page: https://en.wikipedia.org/wiki/Inversion_(music)
     Note: This extended chord syntax for the inversions may not be compatible with other ABC tools.
     Demo video: https://youtu.be/QLOaA5WH8E4
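     For anyone curious how the wrap-around works, here's a small Python sketch of applying a numbered inversion to a chord; this is just an illustration of the idea, not the actual ABC Transcription Tools implementation:

     def invert_chord(midi_notes, inversion):
         # Apply an inversion to a chord given as a list of MIDI note numbers.
         # Each step moves the lowest note up an octave, so inversion == len(chord)
         # gives the original voicing one octave higher, and larger values wrap around.
         notes = sorted(midi_notes)
         octaves_up, steps = divmod(inversion, len(notes))
         notes = [n + 12 * octaves_up for n in notes]
         for _ in range(steps):
             notes.append(notes.pop(0) + 12)
         return notes

     print(invert_chord([52, 55, 59], 1))  # E minor, 1st inversion -> [55, 59, 64]
     print(invert_chord([50, 54, 57], 3))  # D major triad one octave up -> [62, 66, 69]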
  5. I have two C/Gs. My primary C/G instrument is a nice Carroll model; the other is an older Edgley. I take the Edgley when I'm going to play outdoors camping or in other less-than-ideal conditions. For playing with pipers with flat sets in C and B, I also have instruments in Bb/F and A/E.
  6. I've been playing C/G Anglo concertina in the traditional Irish style for about 20 years now. Every so often, I get to meet an Anglo player who plays in the harmonic style, and it just kind of blows my mind. Just for a change of pace and the challenge of trying something completely different, I'd love to learn a tune or two in that style, something like a Sousa march or a Joplin rag. I don't mind starting off with something challenging. I'm retired. I have time. Does anyone have a PDF of an example or two of a tune in the harmonic style, with Gary Coover-style tab, for either of those styles of music that they can share?
  7. The samples are quite long, a full bellows worth of each note at medium volume, about 6 seconds each, and they include whatever minor variations in volume my arm muscles might have created at the time I recorded them. Those minor imperfections, I think, add to the authenticity of the sound. There is no ability to extend the length beyond what's physically possible with the real instrument, and there is no looping of the samples. If you need a note longer than what would be possible on the real instrument, you need to rearticulate the note, just as you would on a real instrument. There are limitations to this system:
     1) While my iOS apps and ABC Transcription Tools will respond to velocity and volume messages by changing the playback volume, there is no change in timbre based on volume or bellows pressure modulation on sustained notes.
     2) There is no change in attack transient behavior or timbre based on velocity.
     But even with those limits, the samples have served me well over the last decade or so for multiple applications. I'm not building applications for professional use; these are primarily mobile apps for silent practice and an instrument for my ABC Transcription Tools.
     Mobile examples from over a decade ago:
     Use of the samples more recently as an instrument for my ABC Transcription Tools:
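     As a minimal Python sketch of the velocity behavior described above (illustrative only, not code from the actual apps):

     def velocity_to_gain(velocity):
         # Map MIDI velocity (0..127) to a linear playback gain (0.0..1.0).
         return max(0, min(127, velocity)) / 127.0

     def play_note(sample_frames, velocity):
         # Scale the pre-recorded sample by velocity; the waveform itself,
         # and therefore the timbre and attack transient, is untouched.
         gain = velocity_to_gain(velocity)
         return [frame * gain for frame in sample_frames]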
  8. I mean exactly what I wrote. The attack transients are included in the per-note samples. Each note on my instrument is sampled to a .wav file and played based on an incoming MIDI trigger for my apps that are MIDI enabled, or a screen touch on the live play apps. As I recorded these samples over a decade ago, my memory of exactly the process I used is a bit fuzzy. Most likely, if I followed the same practice as I do today when sampling new instruments, the samples start when the note waveform first becomes non-zero as displayed in Adobe Audition, include the attack transient, and represent the sound recorded when I played each button on my instrument. The post-processing of the samples is mostly about balancing the volume across the entire instrument, as well as micro-tuning any notes that might not be perfectly in tune. If you're asking whether the apps or samples model the differential reed start latency, no, there is nothing specific in the code that does that; the assumption is that it's inherent in the note samples' attack transients. My per-button concertina samples are used both in my iOS apps as well as in my free web-based ABC Transcription Tools.
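     A rough Python sketch of what per-note triggering looks like; the file layout, note range, and audio_out interface here are hypothetical, the point being that there is one complete recording (attack included) per note:

     import wave

     def load_sample(midi_note, folder="concertina_samples"):
         # Load the pre-recorded sample for one note, e.g. note_60.wav for middle C.
         with wave.open(f"{folder}/note_{midi_note}.wav", "rb") as wav:
             return wav.readframes(wav.getnframes())

     def on_midi_note_on(midi_note, velocity, audio_out):
         # Play back the recorded sample for the triggered note; the sample's own
         # attack transient provides the onset character, nothing is synthesized.
         if velocity > 0:
             audio_out.play(load_sample(midi_note))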
  9. We're not talking about startup reed delay here; we're talking about total delay from controller switch closure to the first sample out of the audio system. When I create my sample sets for my iOS and web apps for concertina and several other instruments, I record every single note on my own instruments individually, and then edit the samples to provide the most realistic playback experience. It's quite easy to add additional startup time to the start of low reed samples to simulate startup delay, but that's more specific to the properties of the instrument being modeled, not the end-to-end MIDI triggering and playback system, which I think is the primary topic here. You can construct the instrument samples any way you want to make them sound and feel more authentic when playing, either building in reed startup latency or not; the key is that you don't want excessive delay in the triggering and rendering system playing that sample.
  10. I'm defining total system latency as the time from the switch closure on the controller device to when the first sample of the sound produced by whatever the target sound module device is arrives at the ears of someone sitting within a couple of feet of the speakers. More than about 30 ms of this kind of latency and the whole system starts to feel sluggish and laggy. Any additional delay because of, for example, low reed startup time would be part of the audio sample for that note, assuming a per-note sampled instrument such as those I use in my concertina and other apps.
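     As a back-of-the-envelope Python example of how the stages might add up against that ~30 ms budget (all of the numbers here are illustrative, not measurements):

     ble_midi_ms     = 5.0                    # controller -> host over BLE MIDI
     synth_buffer_ms = 256 / 48000 * 1000     # one 256-frame buffer at 48 kHz, ~5.3 ms
     audio_out_ms    = 10.0                   # OS/driver output buffering (varies by device)
     acoustic_ms     = 2 / 1125 * 1000        # ~2 ft from the speakers at ~1125 ft/s

     total_ms = ble_midi_ms + synth_buffer_ms + audio_out_ms + acoustic_ms
     print(f"Estimated total latency: {total_ms:.1f} ms")  # about 22 ms with these numbers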
  11. Latency between successive notes would include BLE latency, computation and buffering latency in the sound module, plus audio output hardware subsystem buffer latency on whatever devices you're using. In my experience the audio subsystem latency is generally much longer than the BLE-related latency. I agree that <25 ms should be the target total system latency.
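     Some quick arithmetic (in Python) showing why audio-subsystem buffering often dominates; the buffer sizes and the ~4 ms BLE figure are just illustrative:

     def buffer_latency_ms(frames, sample_rate=44100):
         return frames / sample_rate * 1000

     for frames in (128, 256, 512, 1024):
         print(f"{frames:5d} frames -> {buffer_latency_ms(frames):5.1f} ms")
     # 128 -> 2.9 ms, 256 -> 5.8 ms, 512 -> 11.6 ms, 1024 -> 23.2 ms,
     # versus roughly 4 ms average added latency for BLE MIDI on most platforms.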
  12. From the WARBL2 manual section on BLE MIDI latency:
     BLE latency
     BLE MIDI adds a small amount of latency (delay). On most devices this is imperceptible or nearly so, but for the absolute lowest latency you may choose to use USB MIDI instead. On Windows, Mac, and Android, the average added BLE latency is usually around 4 milliseconds (maximum of 7.5 milliseconds). On iOS devices the average added latency is around 8 milliseconds (maximum of 15 milliseconds), but iOS devices also have relatively low audio latency so this difference may be largely offset.
     For comparison, 8 milliseconds of latency is the amount of time needed for sound to travel 8 feet, so it is the equivalent of standing 8 feet away from an audio source. In fact, for keeping perceived latency to an absolute minimum it is recommended to be close to your audio source (speakers) because just standing several feet from speakers will effectively double the overall latency.
     Having the WARBL2 connected to the Configuration Tool may also add a very small amount of latency because it necessitates sending additional data wirelessly. You can click “Disconnect” in the Configuration Tool when done using it to avoid this.
     Note that there are many other sources of latency. Some Android devices may have high audio latency. Also, all Bluetooth headphones and ear buds have very high latency and these should be avoided.
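     As a rough check of the manual's "milliseconds of latency is about the same as feet from the speaker" rule of thumb, using about 1125 ft/s for the speed of sound in air (Python):

     speed_of_sound_ft_per_s = 1125
     for latency_ms in (4, 8, 15):
         distance_ft = speed_of_sound_ft_per_s * latency_ms / 1000
         print(f"{latency_ms} ms of latency is roughly {distance_ft:.1f} ft from the speakers")
     # 4 ms -> 4.5 ft, 8 ms -> 9.0 ft, 15 ms -> 16.9 ft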
  13. If you don't have a sound module actually built into the MIDI concertina and are relying on an external sound module (either hardware MIDI or virtual instruments), then BLE MIDI is the way to go to transmit the data to either a computer or mobile device with extremely low latency, generally below 10 ms. This is what the new WARBL2 wind controller uses for its MIDI data streams. Personally, I'd prefer a MIDI concertina that was just a controller and didn't have built-in sounds, and then use any of the many sound module apps available on both iOS and macOS as my sound modules.
  14. They are all compatible with the use of appropriate dongles. Some are passive, some are active. USB-C has the advantages of being about the same size as Micro-USB and Lightning connectors, supporting higher charging currents, and being physically symmetrical (like Apple's Lightning connector), and, because of EU and other pressures, it is rapidly becoming the standard for pretty much any portable device that either needs to be charged or connects to a computer. My Mac Studio has USB-C, Thunderbolt 4, and USB-A connectors, with the USB-A connectors using USB-3 high-speed signaling but compatible all the way back to original USB signaling for low-speed peripherals like mice and keyboards. Both USB-C and Thunderbolt 4 use the same connector, and Thunderbolt 4 supports USB signaling as well. https://support.apple.com/guide/mac-studio/take-a-tour-apd0fd69f4be/mac