Thank you for the PlayMaker actions! I have a couple of questions.

I don't see an equivalent to 'List Voices'; instead we get a number that seems to index an alphabetical list(?). By randomly typing in numbers until I hit what I recognize as 'Zarvox', I found I have 72 voices loaded on my Mac; higher numbers seem to default to Agnes (Voice 0). I can deal with that. Once I find the voice(s) I want to use I can write them down, but:

1. If I add more voices, these numbers will change.
2. I can't use a PlayMaker variable (int) to assign the Voice globally. I would have to update each instance of the Speak action by hand.

Is there something I've missed? Could the Voice input be changed to an FsmInt variable? Could the Text input also be changed to an FsmString variable? I think this is going to be necessary for anything complex.

Edit: I tried to sit with a PlayMaker tutorial to learn how to change the actions to PM variables, but scripting is over my head, tbh. I think essentially we'll need all the inputs to be PlayMaker FSM variables. I can think of reasons to control volume and rate, and even the AudioSource game object, through PlayMaker.
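For what it's worth, a custom PlayMaker action that exposes its inputs as FSM variables would look roughly like the sketch below. This is a hypothetical example, not the shipped RT-Voice action: the class name and the `Speaker.VoiceForName`/`Speaker.Speak` calls are assumptions about the asset's API, and looking a voice up by name rather than by index sidesteps the renumbering problem described above.

```csharp
// Hypothetical sketch, not the shipped RT-Voice action.
using HutongGames.PlayMaker;

[ActionCategory("RT-Voice (sketch)")]
public class SpeakFsm : FsmStateAction
{
    [RequiredField]
    public FsmString text;       // text to speak, assignable from an FSM string variable

    public FsmString voiceName;  // e.g. "Zarvox"; a name survives adding/removing voices,
                                 // unlike an index into the alphabetical list

    public FsmFloat rate;        // speech rate
    public FsmFloat volume;      // playback volume

    public override void Reset()
    {
        text = null;
        voiceName = null;
        rate = 1f;
        volume = 1f;
    }

    public override void OnEnter()
    {
        // Speaker.VoiceForName and this Speak signature are assumptions about
        // RT-Voice's API - check the asset's actual overloads before using.
        var voice = Speaker.VoiceForName(voiceName.Value);
        Speaker.Speak(text.Value, null, voice, true, rate.Value, volume.Value);
        Finish();
    }
}
```

Because every field is an Fsm* type, each input can be bound to a global FSM variable in the PlayMaker editor instead of being hard-coded per action instance.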
Hi wetcircuit,

Thank you very much for your valuable feedback! The current PlayMaker actions are at an early stage and we're aware that they aren't complete. We don't use PlayMaker in our own projects, so we have to learn how you and others will use them.

I've already made one improvement: you can now 'debug' all available voices, which should help you find your desired voice. Please send me an email with your invoice and I'll send you the new actions. I'm currently on holiday until the 16th of May and my options are fairly limited during this time. The Internet in South Africa isn't that great. So long, Stefan.

To use the Microsoft Speech Platform, RT-Voice would need to integrate the Microsoft Speech Platform (MSSP) SDK found here: Unity app developers would then need to ensure they install/deploy the MSSP runtime and voices (not installed by default with the OS), which are also available at that same link. The benefits are 1) much better voices and 2) staying in step with MS for Win10 and beyond while still being able to run on older Win7 systems. You would likely need to add an API method for selecting SAPI 5 vs. MSSP for the Windows use case.
I have some users still on Windows 7, and the only SAPI 5 voice installed with the OS by default is Anna, which is very robotic/awful. I understand there are other commercial voices out there, but I'm currently trying to stay with voices provided freely via Microsoft OSes to simplify licensing concerns.
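To make the request concrete, an engine-selection API might look something like the following. This is purely illustrative; RT-Voice has no confirmed setting like this, and every name here is hypothetical.

```csharp
// Purely illustrative sketch of a provider-selection API; all names are hypothetical.
public enum WindowsSpeechApi
{
    SAPI5,  // classic voices shipped with the OS (e.g. Anna on Win7)
    MSSP    // Microsoft Speech Platform runtime + separately installed voices
}

public static class SpeechConfig
{
    // The wrapper would route voice enumeration and synthesis calls
    // to whichever engine is selected here.
    public static WindowsSpeechApi Api { get; set; } = WindowsSpeechApi.SAPI5;
}
```

The key point is that voice enumeration and synthesis both have to respect the selected engine, since SAPI 5 and MSSP maintain separate voice registries.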
Thank you for your responses! I did catch on that pause and resume should be done using the Pause and UnPause methods on the audio source.
Still learning my way around Unity. MSSP must enumerate/track the list of available voices differently from SAPI, as I do not see the MSSP voices listed when I use your VoicesForCulture method. I am not surprised that the underlying speech synthesizer is the same. I am getting different behavior with regard to Silence: I suspect that within Speaker.cs, line 317, sources is an empty list even though I am passing in a valid audio source as part of my call to Speak.
Therefore, source.Stop and Destroy(source) never get called, and the voice continues to be heard even after a call to Silence.
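The pattern being described, tracking every speaking AudioSource so Silence can actually stop it, can be sketched as below. This is illustrative only, not RT-Voice's actual internals; the class and method names are invented for the example.

```csharp
// Illustrative sketch - not RT-Voice's real Speaker.cs code. The point: every
// AudioSource that Speak plays on, including caller-provided ones, must end up
// in the tracked list, or Silence iterates an empty list and never stops anything.
using System.Collections.Generic;
using UnityEngine;

public static class SpeechSources
{
    private static readonly List<AudioSource> tracked = new List<AudioSource>();
    private static readonly List<AudioSource> generated = new List<AudioSource>();

    // Speak should register caller-provided sources here too; if it only tracks
    // sources it creates itself, provided sources are invisible to Silence.
    public static void Register(AudioSource source, bool createdByWrapper)
    {
        if (source == null) return;
        tracked.Add(source);
        if (createdByWrapper) generated.Add(source);
    }

    public static void Silence()
    {
        foreach (AudioSource source in tracked)
        {
            if (source == null) continue;
            source.Stop();
            if (generated.Contains(source))
                Object.Destroy(source);  // only destroy wrapper-created sources
        }
        tracked.Clear();
        generated.Clear();
    }
}
```

Destroying only wrapper-created sources avoids surprising callers who handed in their own AudioSource and expect it to survive a Silence call.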
Sort of? All I mean by caching is to have lines that were previously generated by Speak be played back from the already-generated audio files. That way, after playing the game for a while on one platform, when you move the game to the other OS the audio files already generated in the project will still play and keep the game's voice consistent, unless a new line of dialog is spoken, in which case it would be generated by the new OS. In that case, it might be good to be able to specify which OS/voice combo is the desired one for caching, so as not to get files cached from the wrong OS voice.
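The caching idea above can be sketched as a deterministic file name derived from the (voice, text) pair, so the same line always maps to the same cached file no matter which OS first generated it. The cache directory, the .wav extension, and the class/method names below are all assumptions for illustration.

```csharp
// Sketch of cross-OS caching: hash (voice, text) to a stable file name so a
// clip generated on one OS is replayed on the other instead of re-synthesized.
// Paths, extension, and names are assumptions, not RT-Voice's API.
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public static class SpeechCache
{
    public static string CachePathFor(string voiceName, string text)
    {
        // Hash voice + text so the same line always maps to the same file.
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(voiceName + "\n" + text));
            string name = BitConverter.ToString(hash).Replace("-", "").Substring(0, 16);
            return Path.Combine("SpeechCache", name + ".wav");
        }
    }

    // Returns true (with the path) if this line was already generated;
    // otherwise the caller synthesizes once and writes the file to 'path'.
    public static bool TryGetCached(string voiceName, string text, out string path)
    {
        path = CachePathFor(voiceName, text);
        return File.Exists(path);
    }
}
```

Keying the hash on the voice name also addresses the "wrong OS voice" concern: a project could pin one canonical voice per line, and clips generated with any other voice would simply miss the cache.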