As I reported in the review, part of my test was an audio interview I had recorded and then phonetically indexed using Fast-Talk. When the interview was done I ran Fast-Talk on the recorded audio; it took only a few seconds to do its job. Then I could search. One thing I remembered my interviewee saying was “Jean Paoli spent four hours showing me XDocs.” I was able to pinpoint that sentence’s location in the recording by typing “four hours” — which is what you’d expect. But I could also, unexpectedly, find it by typing “phor ours” — because it was /phonetic/ search, not text search. It was a hugely clever optimization that bypassed speech-to-text completely.
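The idea can be sketched with a toy phonetic matcher. This is not Fast-Talk's actual algorithm (which indexed phoneme sequences in the audio itself); it is a minimal illustration, under assumed normalization rules, of why differently spelled queries that sound alike can hit the same target:

```python
# Toy sketch of phonetic matching -- NOT Fast-Talk's real algorithm.
# Words are normalized to crude phoneme-like keys so that different
# spellings of the same sounds compare equal.

import re

def phonetic_key(word: str) -> str:
    """Reduce a word to a rough sound-alike key (illustrative rules only)."""
    w = word.lower()
    w = w.replace("ph", "f")           # "phor" -> "for"
    w = re.sub(r"^h", "", w)           # drop leading h: "hours" -> "ours"
    w = re.sub(r"[aeiouy]+", "a", w)   # collapse vowel runs to one symbol
    w = re.sub(r"(.)\1+", r"\1", w)    # collapse doubled letters
    return w

def phonetic_match(query: str, indexed: str) -> bool:
    """True if the two phrases reduce to the same sequence of keys."""
    return [phonetic_key(t) for t in query.split()] == \
           [phonetic_key(t) for t in indexed.split()]

print(phonetic_match("phor ours", "four hours"))  # True under these toy rules
```

Real phonetic indexing works on the audio side as well, mapping the recorded speech to phoneme lattices and matching the query's phonemes against them, so no transcript ever needs to exist.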
Some years later I wondered ([link]) why we don’t yet have this capability. I am still wondering. It would be a game-changer in many ways, as anyone who has had to search audio recordings the hard way will attest.
Not with equivalencies, which is what such phrases and sentences containing “is” (where “is” equals “equals”) are.
As a sometime journalist, I’ve run into the difference between what you heard and what was said. In particular, people believe they said things that their actual words may not express. That probably has to do with the leap between formulating a thought and articulating it. (My knowledge of linguistic theory is nearly absent, so I’ll stop there.) Also, the listener may hear things differently depending on the view one holds of that particular person. Relying on written notes certainly introduces bias, which can come out especially in a summary or analysis. Then begins the finger-pointing.
For these reasons, an actual record of the statement is very useful. I’m grateful to the Chronicle for the careful record that you provide, and the attention to accuracy.