- detect_pitch(chunk, min_frequency=82.0, max_frequency=1000.0, samplerate=44100.0, sens=0.1, ratio=5.0)
- Return the pitch present in a chunk of sampled sound
The chunk should be a numpy array of samples from the soundcard,
in 16-bit mono format. The return value will either be None if no
pitch could be detected, or a frequency in Hz if a pitch was
detected. The chunk should be at least 1024 bytes long for
accurate pitch detection of lower frequencies.
Human vocal range is from about E2 to C6. This corresponds to
frequencies of approx 82-1000 Hz. Middle C is C4 at 261.6 Hz.
Keyword arguments:
min_frequency - minimum frequency to detect (default: 82.0)
max_frequency - maximum frequency to detect (default: 1000.0)
samplerate - sampling frequency of input (Hz) (default: 44100.0)
sens - tuning parameter to avoid octave skipping
(should be between 0.0 and 1.0, default: 0.1)
ratio - how good the detected pitch must be before it is accepted,
higher numbers are more stringent (default: 5.0)
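Example (a usage sketch only; the import name "analyse" and the use of PyAudio for capture are assumptions, not part of this documentation):

    import numpy
    import pyaudio
    import analyse  # assumed module name

    CHUNK = 1024   # frames per buffer; comfortably above the 1024-byte minimum
    RATE = 44100

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)

    raw = stream.read(CHUNK)
    chunk = numpy.frombuffer(raw, dtype=numpy.int16)  # 16-bit mono samples

    freq = analyse.detect_pitch(chunk, samplerate=RATE)
    print("no pitch" if freq is None else "%.1f Hz" % freq)

    stream.stop_stream()
    stream.close()
    pa.terminate()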
- loudness(chunk)
- Calculate and return volume of input samples
The input chunk should be a numpy array of samples for analysis, as
returned by the sound card, which should be in 16-bit mono mode.
The return value is measured in dB and ranges from 0dB (maximum
loudness) down to -80dB (no sound). A typical very loud sound is
around -1dB; typical silence is around -36dB.
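One common way to produce a figure with this shape (0dB at full scale, large negative values for quiet input) is an RMS level expressed in dB; a minimal sketch of that idea, not necessarily this library's exact formula:

    import numpy

    def loudness_db(chunk, floor=-80.0):
        samples = chunk.astype(numpy.float64) / 32768.0  # normalise 16-bit samples to [-1, 1]
        rms = numpy.sqrt(numpy.mean(samples ** 2))
        if rms <= 0.0:
            return floor
        return max(floor, 20.0 * numpy.log10(rms))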
- midinum_from_pitch(freq)
- Return midi note number from pitch
Midi note numbers go from 0-127, middle C is 60. Given a frequency
in Hz, this function computes the midi note number corresponding to
that frequency. The return value is a floating point number.
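This is consistent with the standard equal-temperament relation between frequency and MIDI note number (taking A4 = 440 Hz = note 69), which puts 261.6 Hz at note 60; a sketch of that relation, not necessarily this library's exact implementation:

    import math

    def midi_from_freq(freq):
        # m = 69 + 12 * log2(f / 440), assuming A4 = 440 Hz = note 69
        return 69.0 + 12.0 * math.log(freq / 440.0, 2)

    print(midi_from_freq(261.6))  # ~60.0, i.e. middle C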
- musical_detect_pitch(chunk, min_note=40.0, max_note=84.0, samplerate=44100, sens=0.1, ratio=5.0, smooth=1.0)
- Return the pitch present in a chunk of sampled sound
The chunk should be a numpy array of samples from the soundcard,
in 16-bit mono format. The return value will either be None if no
pitch could be detected, or a midi note number if a pitch was
detected. The chunk should be at least 1024 bytes long for
accurate pitch detection of lower frequencies. The return value
will be a floating point number, e.g. 60.5 is half a semitone
above middle C (60).
Human vocal range is from about 40 (E2) to 84 (C6). This
corresponds to frequencies of approx 82-1000 Hz. Middle C is 60
(C4).
Keyword arguments:
min_note - minimum midi note to detect (default: 40)
max_note - maximum midi note to detect (default: 84)
samplerate - sampling frequency of input (Hz) (default: 44100.0)
sens - tuning parameter to avoid octave skipping
(should be between 0.0 and 1.0, default: 0.1)
ratio - how good the detected pitch must be before it is accepted,
higher numbers are more stringent (default: 5.0)
smooth - how much to smooth output (default: 1.0)
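Example (a usage sketch only; the import name "analyse" is an assumption, and the -36dB gate simply reuses the typical-silence figure quoted above):

    import analyse  # assumed module name

    def note_from_chunk(chunk, silence_db=-36.0):
        # Skip chunks that are no louder than typical silence.
        if analyse.loudness(chunk) <= silence_db:
            return None
        note = analyse.musical_detect_pitch(chunk)
        if note is None:
            return None
        nearest = int(round(note))
        cents = (note - nearest) * 100.0
        return nearest, cents  # e.g. (60, 25.0) for a quarter semitone above middle C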
- pitch_from_midinum(m)
- Return pitch of midi note number
Midi note numbers go from 0-127, middle C is 60. Given a note number,
this function computes the corresponding frequency in Hz. The return
value is a floating point number.
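This is the inverse of midinum_from_pitch; with the same A4 = 440 Hz = note 69 assumption the relation is f = 440 * 2 ** ((m - 69) / 12). A minimal sketch, not necessarily this library's exact implementation:

    def freq_from_midi(m):
        # f = 440 * 2 ** ((m - 69) / 12), assuming A4 = 440 Hz = note 69
        return 440.0 * 2.0 ** ((m - 69.0) / 12.0)

    print(freq_from_midi(60))  # ~261.6 Hz, middle C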