From: Silas S. B. <ss...@ca...> - 2023-04-24 11:07:56
|
Hi Andres,

I'm a bit confused by your message. The only version of Linux I can find that's called "OpenLinux" is Caldera OpenLinux, which was published between 1995 and 1996 and didn't run on ARM, so I don't think that's what you're running on that Cortex-A5. It might help if you can link to the website of the version of Linux you mean, because the word "openlinux" is not enough for us to find it.

The first thing to check is: does that version of Linux have any documentation about how you're supposed to compile things for it? Before dealing with the Windows environment, I'd want to see if I could compile something for the system from whatever environment is described in that documentation. For example, if the documentation says how to cross-compile from x86 GNU/Linux, then see if you can either borrow an x86 GNU/Linux computer, or set up a virtual machine on your Windows system to emulate x86 GNU/Linux, and see if you can cross-compile from that environment as a starting point. You might even be able to run the target Linux directly under QEMU and compile things on it directly, or run a compiler on your actual target device if it has enough memory to do so. That is probably not your ideal long-term solution, but doing it once will tell us whether the problem you are having is that of compiling eSpeak for the device in general, or only a problem with doing it from a Windows environment.

Cross-compiling from Windows to Linux is difficult, as few developers ever do it. GNU/Linux enthusiasts tend to prefer working under GNU/Linux, and if they want to cross-compile anything they will want to cross-compile from GNU/Linux to Windows, not the other way around.

If the Windows machine has enough free disk space and memory, then you might as well just set up a GNU/Linux virtual machine on it and figure out how to make your Windows build scripts automatically log in to the virtual machine, run the compiler on it, and get the resulting binary out of it. I expect the effort required to set that up is less than the effort required to figure out how to actually cross-compile from Windows, unless the version of Linux you're using has specifically been designed to allow cross-compiling from Windows, in which case we still need to check its documentation for exactly which Windows toolsets and compiler chains it assumes you have. This is not really a problem specific to eSpeak; you will have it no matter what software you want to compile for that chip.

I'm also wondering if you really need a .a file; it might be easier just to bring parts of the eSpeak code directly into your project, if your project is written in C. That depends on how that version of Linux works. As eSpeak is licensed under the GPL, you will also have to GPL-license the source code of anything you make that uses it as a library, if you distribute the binary in any product.

Thanks.

Silas

-- Silas S. Brown http://ssb22.user.srcf.net "A lover of silver will never be satisfied with silver" - Ecclesiastes 5:10 |
|
From: Andres M. <and...@gm...> - 2023-04-19 15:43:02
|
Greetings. As you say, it is sad. I read about espeak-ng, but my choice is the code of the original eSpeak, which I find simpler; in that form I managed to compile it, though it requires many dependencies. I also tried to compile the speak-ng library. The project is to port to an SoC, the Quectel EC200U chip, based on the ARM Cortex-A5 architecture running OpenLinux, and I need to cross-compile from a Windows environment. I only need Spanish synthesis; there is no need to modify the pitch or other attributes of the voice. I only want the system to synthesize so that my own function can dump the audio to a WAV file; I have my own player developed for WAV files. Many thanks for your response. |
|
From: Andres M. <and...@gm...> - 2023-04-19 15:32:18
|
Greetings. As you say, it is sad. I read about espeak-ng, but my choice is the code of the original eSpeak, which I find simpler; in that form I managed to compile it, though it requires many dependencies. I also tried to compile the speak-ng library. The project is to port to an SoC, the Quectel EC200U chip, based on the ARM Cortex-A5 architecture running OpenLinux, and I need to cross-compile from a Windows environment. Many thanks for your response. |
|
From: Brother T. C. <tim...@me...> - 2023-04-18 14:12:26
|
Good morning all. I don't know why I've been getting your emails, but please remove me from your list. Thanks. In Christ's Service, Timothy. Federal Aviation Administration Safety Team Service Provider. Sent from my iPhone!

> On Apr 17, 2023, at 17:55, Silas S. Brown <ss...@ca...> wrote:
>
> Hi Andres, this mailing list was set up to discuss the original
> eSpeak by Jonathan Duddington, who, sad to say, has passed away. As
> it was not possible to take over the project without Jonathan's
> credentials, the new developers have started a separate project
> called eSpeak NG, where NG means Next Generation. eSpeak NG gets
> more developer attention than old eSpeak, since it is no longer
> possible for old eSpeak to be changed. So you might like to try
> heading over to the eSpeak NG project at
> https://github.com/espeak-ng/espeak-ng and see if they can help.
>
> (I worked with Jonathan to improve some aspects of the
> original eSpeak, and I have a language-learning program that
> still bundles the original eSpeak on some platforms. But
> nowadays I find myself just occasionally replying to questions
> like yours to let people know that all the active development
> has moved to the separate eSpeak NG project.)
>
> Thanks.
>
> Silas
>
> --
> Silas S. Brown http://ssb22.user.srcf.net
>
> "What is now proved was once only imagined" - William Blake |
|
From: Silas S. B. <ss...@ca...> - 2023-04-17 21:54:33
|
Hi Andres, this mailing list was set up to discuss the original eSpeak by Jonathan Duddington, who, sad to say, has passed away. As it was not possible to take over the project without Jonathan's credentials, the new developers have started a separate project called eSpeak NG, where NG means Next Generation. eSpeak NG gets more developer attention than old eSpeak, since it is no longer possible for old eSpeak to be changed. So you might like to try heading over to the eSpeak NG project at https://github.com/espeak-ng/espeak-ng and see if they can help.

(I worked with Jonathan to improve some aspects of the original eSpeak, and I have a language-learning program that still bundles the original eSpeak on some platforms. But nowadays I find myself just occasionally replying to questions like yours to let people know that all the active development has moved to the separate eSpeak NG project.)

Thanks.

Silas

-- Silas S. Brown http://ssb22.user.srcf.net "What is now proved was once only imagined" - William Blake |
|
From: Andres M. <and...@gm...> - 2023-04-14 20:04:33
|
Greetings. I found a reference to a .a library, but the file itself was not found, and I think it needs to be compiled; I need help compiling the .a library in a Windows environment. I successfully compiled the DLL into a .lib library in Visual Studio, but I need it for porting to a Linux environment. |
|
From: Michael Cesarz-S. <ces...@gm...> - 2021-07-06 14:39:30
|
Hi all,

Sorry Silas, I think I wrote to your private address due to a mistake with Thunderbird. Here is my mail again:

Hi Silas, I wonder how to find out which version of eSpeak I installed. However, I can give it a try with the newer one you mentioned. I do not think the two versions can be installed on one machine together.

Best regards
Michael

Am 05.07.2021 um 18:13 schrieb Silas S. Brown:
> Hi Michael, unfortunately I don't know if there is a way to tell Jaws
> to list voices in a particular order. Most blind Windows users I know
> prefer NVDA instead of Jaws. But it might be a limitation of the
> Windows Speech API. (To be honest I prefer GNU/Linux to Windows.)
>
> But the main thing I want to say is, this mailing list was set up to
> discuss the original eSpeak by Jonathan Duddington, who, sad to say,
> has passed away. As it was not possible to take over the project
> without Jonathan's credentials, the new developers have started a
> separate project called eSpeak NG, where NG means Next Generation.
> You might have already installed the NG version of eSpeak.
> Anyway, eSpeak NG gets more developer attention than old eSpeak,
> since it is no longer possible for old eSpeak to be changed.
> So you might like to try heading over to the eSpeak NG project
> at https://github.com/espeak-ng/espeak-ng and see if they can help.
>
> (I worked with Jonathan to improve some aspects of the
> original eSpeak, and I have a language-learning program that
> still bundles the original eSpeak on some platforms. But
> nowadays I find myself just occasionally replying to questions
> like yours to let people know that all the active development
> has moved to the separate eSpeak NG project.)
>
> Thanks.
>
> Silas |
|
From: Silas S. B. <ss...@ca...> - 2021-07-05 16:33:23
|
Hi Michael, unfortunately I don't know if there is a way to tell Jaws to list voices in a particular order. Most blind Windows users I know prefer NVDA instead of Jaws. But it might be a limitation of the Windows Speech API. (To be honest I prefer GNU/Linux to Windows.)

But the main thing I want to say is, this mailing list was set up to discuss the original eSpeak by Jonathan Duddington, who, sad to say, has passed away. As it was not possible to take over the project without Jonathan's credentials, the new developers have started a separate project called eSpeak NG, where NG means Next Generation. You might have already installed the NG version of eSpeak. Anyway, eSpeak NG gets more developer attention than old eSpeak, since it is no longer possible for old eSpeak to be changed. So you might like to try heading over to the eSpeak NG project at https://github.com/espeak-ng/espeak-ng and see if they can help.

(I worked with Jonathan to improve some aspects of the original eSpeak, and I have a language-learning program that still bundles the original eSpeak on some platforms. But nowadays I find myself just occasionally replying to questions like yours to let people know that all the active development has moved to the separate eSpeak NG project.)

Thanks.

Silas

-- Silas S. Brown http://ssb22.user.srcf.net |
|
From: Michael Cesarz-S. <ces...@gm...> - 2021-07-01 18:34:35
|
Hello there,

My name is Michael and I have a question concerning the installation parameters de+m1, de+m2, and so on. I chose to install the eSpeak Windows version on a Windows 10 system. I set all these above-mentioned parameters for different languages, as follows: de, de+m1 (and then all possibilities with m), afterwards all possibilities with f, and then whisper, croak, and all the Klatt variants. I chose this for five languages, I think. If I now try to use the voices with Jaws, I find the following: the voices seem to be there (it is difficult to say, as there are many of them); however, they are not in a logical order. Did I do anything wrong, or is it not possible to keep the order of the typed parameters? I hope my English was understandable.

Best regards
Michael |
|
From: Valdis V. <val...@od...> - 2020-03-12 14:56:14
|
Hi, Bhavya, my answers are below your questions:

> * The variant files don't seem to have any extension. Which editor
> should I open them with? Currently viewing a lot of numbers when
> opening one of the variant files using Notepad.

Some of these files use Unix (LF) line breaks, some Windows (CR LF). If your Notepad is not very new, it will not handle Unix line breaks properly. Use e.g. Notepad++.

> * I would assume that if these numbers are meaningful, they depict
> different parameters of a voice. Is there some reference on what these
> different properties of speech are, so that I know what aspect of the
> speech I am tweaking when changing any number?

https://github.com/espeak-ng/espeak-ng/blob/master/docs/voices.md

> * Any other tips or thoughts on creating new eSpeak NG variants and
> refining existing ones would also be useful.

https://github.com/espeak-ng/espeak-ng/blob/master/docs/contributing.md

> I would greatly appreciate any assistance.
>
> Best Regards,
> Bhavya

Good luck!

Valdis |
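[Editorial sketch relating to the voices.md reference above: variant files are plain-text attribute lists, one attribute per line, with `//` comments. The attribute names below (name, language variant, gender, pitch, formant) are taken from the voices.md documentation; the specific numbers are illustrative, not copied from any shipped variant.]

```
name example_variant     // voice name shown in a synthesizer's voice list
language variant         // marks this file as a variant, not a base language
gender male
pitch 70 110             // base pitch and pitch range
formant 1 98 100 100     // formant 1: frequency, strength, width (percent)
```

So the "lot of numbers" seen in Notepad are these per-attribute values; with Unix line breaks rendered properly they read as one attribute per line.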
|
From: Bhavya s. <bha...@gm...> - 2020-03-12 11:37:51
|
Dear all,

I am currently looking at all the eSpeak NG variant files in C:\Program Files (x86)\NVDA\synthDrivers\espeak-ng-data\voices\!v. I was hoping to dip my feet into editing and improving eSpeak NG variants in the coming days, and see what I might be able to come up with.

* The variant files don't seem to have any extension. Which editor should I open them with? Currently viewing a lot of numbers when opening one of the variant files using Notepad.
* I would assume that if these numbers are meaningful, they depict different parameters of a voice. Is there some reference on what these different properties of speech are, so that I know what aspect of the speech I am tweaking when changing any number?
* Any other tips or thoughts on creating new eSpeak NG variants and refining existing ones would also be useful.

I would greatly appreciate any assistance.

Best Regards,
Bhavya |
|
From: Valdis V. <val...@od...> - 2019-12-08 19:09:33
|
eSpeak NG is a community-driven project and its contributors develop only the text-to-speech functionality. Passing text from the screen is the responsibility of screen readers, which use the API provided by eSpeak NG. In particular, if the ChromeVox screen reader handles right-to-left writing (e.g. Arabic) correctly, then the (probably) missing thing for Hebrew is configuration files in eSpeak NG. Please look at:

https://github.com/espeak-ng/espeak-ng/blob/master/docs/contributing.md

and particularly at:

https://github.com/espeak-ng/espeak-ng/blob/master/docs/add_language.md

If it helps, I can create initial settings for the Hebrew language, but further improvements are up to native speakers of the language.

Valdis

> I'm using a Chromebook and would love to have a Hebrew voice for it.
> in order for me to do any kind of studying and or real using of said
> language, a screen reader must first be able to read it and Chromevox
> refuses to do so.
> I'm hoping either Chromevox developers will come through on this or
> ESpeak developers.
> it is crucial that one be developed.
> how are blind Israelis expected to use Chromebooks and other devices
> where the screen reader or readers won't read their native language?
> to pay for a screen reader just sounds like a waste when screen
> readers are made and placed for free on certain machines like
> Chromebooks for example.
> apple provides a free hebrew voice for all voiceover users and I
> feel that many screen reader developers are severely lacking in this
> department.
> thoughts?
>
> in Christ's Service.
> Timothy.
> Sent from my iPhone |
|
From: Brother T. C. <tim...@me...> - 2019-12-07 05:39:07
|
I'm using a Chromebook and would love to have a Hebrew voice for it. In order for me to do any kind of studying or real use of that language, a screen reader must first be able to read it, and ChromeVox refuses to do so. I'm hoping either the ChromeVox developers or the eSpeak developers will come through on this; it is crucial that one be developed. How are blind Israelis expected to use Chromebooks and other devices where the screen reader won't read their native language? To pay for a screen reader just sounds like a waste when screen readers are made available for free on certain machines, like Chromebooks for example. Apple provides a free Hebrew voice for all VoiceOver users, and I feel that many screen reader developers are severely lacking in this department. Thoughts?

In Christ's Service,
Timothy.
Sent from my iPhone |
|
From: Valdis V. <val...@od...> - 2019-09-16 15:17:58
|
Espeak is text oriented. If you pass it some binary data, it will treat it as a sequence of characters according to the encoding scheme of the operating system (e.g. UTF-8). So, to have the value of binary data spoken, you have to convert it to text first, using e.g. xxd or another decoder:

echo -e '\xff\x51' | xxd -p | espeak

That will speak the binary values as hexadecimal numbers.

As was said in another response, the eSpeak project has been inactive since the disappearance of Jonathan Duddington. Active development and support now happen in the eSpeak NG project, https://github.com/espeak-ng/espeak-ng, maintained by Reece H. Dunn. The mailing list of the espeak-ng project is https://groups.io/g/espeak-ng

Valdis

> if you run xev | grep keysym and press keys you can see the actual
> hex codes
>
> But I cannot echo them using
> echo -e '\xff\x51' | espeak
>
> can someone help? I think it would be useful especially with
> vokoscreen and making software tutorials for youtube, i.e. type espeak
> and all keypresses are communicated for a JAWS alternative or screen
> reader like effect.
>
> ff51 left
> ff52 up
> ff53 right
> ff54 down
>
> Also it would be cool to add a parameter to prefix espeak data, like sed,
> maybe something like
> alias prespeak='$1 | espeak'
> prespeak sed -e 'y/aeiou/eioua/g'
>
> If you could get xargs or a make -C or -c option to insert a sed -e
> script perhaps?
>
> also what was the keys program for right clicking on text to send to
> espeak to make a linux hotkey? |
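[Editorial sketch relating to the decoding advice above: the conversion step can be shown without espeak itself. This uses od, a POSIX alternative to xxd that is present on systems where xxd may not be; only the commented-out final line assumes espeak is installed.]

```shell
# Turn two raw bytes into their hexadecimal spelling, a token a TTS engine can read.
# od -An -tx1 prints the bytes as hex without an address column; tr strips spacing.
hex=$(printf '\xff\x51' | od -An -tx1 | tr -d ' \n')
echo "$hex"    # prints: ff51
# To hear it spoken (assuming espeak is installed):
# echo "$hex" | espeak
```

The same pipeline works for any byte sequence, which is what makes it usable for speaking keysym codes captured from xev.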
|
From: Silas S. B. <ss...@ca...> - 2019-09-16 13:50:32
|
Hi, unfortunately this list is no longer active because the original author of eSpeak, Jonathan Duddington, has been uncontactable for several years. But there are some developers working on a next-generation version of eSpeak called eSpeak NG, so if you are interested in further eSpeak development then you might like to try eSpeak NG and join the eSpeak NG community instead. They are using the following address: https://groups.io/g/espeak-ng/

With regards to reading out keypresses etc., it's probably best if you simply include the appropriate sed command in the pipeline yourself, rather than seeking to have it integrated into the main espeak program. I would suggest using sed -u (unbuffered); see "man sed" for details.

Silas

-- Silas S Brown http://people.ds.cam.ac.uk/ssb22 |
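[Editorial sketch of the sed -u suggestion above: -u (a GNU sed extension) flushes each line as it arrives, which matters when the downstream consumer is a speech engine that should start talking immediately. Here cat stands in for espeak so the pipeline runs anywhere; the vowel-rotation script is the one from the earlier message in this thread.]

```shell
# Rewrite vowels line by line, unbuffered, before handing text to a consumer.
# In real use the final stage would be `espeak` instead of `cat`.
printf 'left\nright\n' | sed -u 'y/aeiou/eioua/' | cat
# prints:
#   lift
#   roght
```

Without -u, sed may buffer its output when writing to a pipe, so a long-running input stream (e.g. live keypress names) would be spoken in delayed bursts rather than as each line arrives.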
|
From: ̵tImposterMO̵ t̵ <emb...@gm...> - 2019-09-10 20:23:45
|
If you run xev | grep keysym and press keys, you can see the actual hex codes. But I cannot echo them using:

echo -e '\xff\x51' | espeak

Can someone help? I think it would be useful, especially with vokoscreen and making software tutorials for YouTube, i.e. type espeak and all keypresses are communicated, for a JAWS alternative or screen-reader-like effect.

ff51 left
ff52 up
ff53 right
ff54 down

Also it would be cool to add a parameter to prefix espeak data, like sed, maybe something like:

alias prespeak='$1 | espeak'
prespeak sed -e 'y/aeiou/eioua/g'

If you could get xargs or a make -C or -c option to insert a sed -e script, perhaps?

Also, what was the keys program for right-clicking on text to send to espeak, to make a Linux hotkey?

2019-07-15 18:44 GMT, Ember AR Leona <emb...@gm...>:
> Hi does anyone know any attorneys? -openInvent.club
>
> Please code keyfile and Periphery Password Protocol instead of the
> normal password protection it uses the FilePath and Checksum(s) as
> password. |
|
From: Ember AR L. <emb...@gm...> - 2019-07-15 18:44:41
|
Hi does anyone know any attorneys? -openInvent.club Please code keyfile and Periphery Password Protocol instead of the normal password protection it uses the FilePath and Checksum(s) as password. |
|
From: Ember AR L. <emb...@gm...> - 2019-07-11 00:47:54
|
Hello, I founded openInvent.club. It may become an award. I think you guys are great; I like espeak a lot. I want to use espeak in music, specifically with LMMS. So I would like 1-second samples for 60, 120, 180 or 240 BPM music, or perhaps even smaller samples for beat mashing. I tried this:

espeak "I am saying zor the worlds stupid longest phrasde crap of a computer bot speech sin thesis liar machine code did crap poo E lal alallal whheee just kidding" --split=0.01666 -w out.wav

after I tried the -w before the split attribute. Can you change the split implementation to allow for fraction-of-a-second/minute WAV output? Do you have any VST coding experience? This would be even better if implemented with a MIDI input for pitch manipulation, or some way to interface with AT1 Auto-Tune live.

Thanks for your time, guys.

Sincerely,
Ember Autumn Rose Leona

PS I need a lawyer; see vibrochat.com and read the txt files at tiny.cc/openInventCase |
|
From: Silas S. B. <ss...@ca...> - 2019-05-20 12:04:58
|
Dear Senthil, unfortunately this list is no longer active because the original author of eSpeak, Jonathan Duddington, has been uncontactable for several years. But there are some developers working on a next-generation version of eSpeak called eSpeak NG, so if you are interested in further eSpeak development then you might like to try eSpeak NG and join the eSpeak NG community instead. They are using the following address: https://groups.io/g/espeak-ng/

I notice from your screenshot that you already have eSpeak NG installed on your system. I don't know if there will be conflicts if you try to install both the original eSpeak and also the new eSpeak NG on the same system; not many people have tested that. I don't use Windows myself, but I seem to remember the eSpeak Windows installer does let you choose languages at install time. You may have to enter their short codes, which will be zh for Chinese Mandarin and zhy for Chinese Cantonese. (If you don't know which type of Chinese you want, then you probably want Mandarin.) Don't forget to also install the extra dictionary data so it will correctly read more Chinese words.

But please note that, when I was helping to improve the Chinese Mandarin voice, my level of Mandarin was not as good as it is now, so the voice has a strong British accent. I eventually gave up trying to fix the accent and instead helped with a different project called eGuideDog with its Ekho TTS; we used recorded syllables, which take more disk space and are less flexible, but at least they are Chinese-native sounds recorded by my Chinese friends. So you might like to try installing Ekho TTS into SAPI and seeing if it can be used with NVDA. I have not tried this myself, as I prefer GNU/Linux.

Another thing you can try is the Microsoft Chinese voice, which I believe is called Huihui in Windows 10, and is better than the previous Microsoft Lili voice, although it is still a unit-selection voice, which means there will be glitches when you give it text that's a poor match for its training set.

I hope you are able to resolve the problem for your Chinese user.

Best wishes.

Silas

PS: If you happen to know who designed your company's website spi-global.com, you might like to let them know that there are a few problems for sight-impaired users in that design. For example, in the default stylesheet there is poor contrast on the "read more" links; it is not obvious that more content is available by scrolling down; and the slideshow moves by itself before slow readers have finished reading each slide. There is also heavy use of capital letters, where it might be better to use "text-transform: uppercase" in the CSS file, which can then be overridden by users who find capitals harder to read. I only looked at the front page.

-- Silas S Brown http://people.ds.cam.ac.uk/ssb22 "The nearer the dawn the darker the night" - Longfellow |
|
From: K2, S. K. <Sen...@sp...> - 2019-05-13 16:13:21
|
Hi Team,

We need information regarding the NVDA synthesizer. Our requirement is to read the Chinese language in NVDA. For that purpose we downloaded eSpeak SAPI5 from the link below and installed it: http://espeak.sourceforge.net/test/latest.html

But after installation the synthesizer doesn't show in NVDA. Please see the snapshot below. [cid:image001.png@01D509D0.1F9AE4B0]

Kindly help us to resolve this issue, or else provide a better way to read the Chinese language in NVDA. Thanks in advance.

Regards,
Senthil Kumar K
SPi Global
Extn. 657 / 608 M +91 9962393314
sen...@sp...<mailto:sen...@sp...>
www.spi-global.com<http://www.spi-global.com/> |
|
From: Silas S. B. <ss...@ca...> - 2018-06-18 13:45:15
|
Dear Ricky, unfortunately this list is no longer active because the original author of eSpeak, Jonathan Duddington, has been uncontactable for several years. But there are some developers working on a next-generation version of eSpeak called eSpeak NG, so if you are interested in further eSpeak development then you might like to try eSpeak NG and join the eSpeak NG community instead. They are using the following address: https://groups.io/g/espeak-ng/ Silas -- Silas S Brown http://people.ds.cam.ac.uk/ssb22 "Man prefers to believe what he prefers to be true." - Francis Bacon |
|
From: Ricky L. <ri...@ia...> - 2018-06-17 22:48:28
|
Hi again all, sorry. Apparently in Jaws, SAPI offers language selections, though I am not sure how to find this in Window-Eyes? Ricky Lomey |
|
From: Ricky L. <ri...@ia...> - 2018-06-17 22:47:45
|
Hi list, I have heard of Espeak for years but only recently started using it, and I found and joined this list literally hours ago. I am still using the free version of Window-Eyes for Office (I think 9.5) with Windows 10, and Espeak is one of the two speech engines I have; it works reasonably well, actually very well for my needs currently. However, I notice that, though Espeak works when I press Enter having selected the voice language I require, one of the things it says when one tabs further is "C activate". So what's the difference between, or benefit of, activating it instead of just leaving it selected? Thank you. Ricky Lomey |
|
From: Ricky L. <ri...@ia...> - 2018-06-17 22:47:39
|
Hi again list, apart from Espeak, SAPI is the other selected speech programme, and I must say I am happier with it on this Windows 10 Dell using the free Window-Eyes for Office than I was with those on the paid Window-Eyes I had, one of which I'm still using on my previous Windows 2000 Pentium 4. However, of the three SAPI voices given, Microsoft David (English, United States) and Hazel (English, Great Britain) stop at certain points in messages, obviously different points in different messages, and I then have to down-arrow to that point and press Control+Shift+R every time. Why would this be, or how can it be resolved? I have no idea. Thanks. Ricky Lomey |
|
From: Ricky L. <ri...@ia...> - 2018-06-17 22:47:25
|
Hi again list, final time. I notice that with Window-Eyes, the Espeak voices of other languages (all those I've tried) try to pronounce things like numbers in English even though I am not using the English voice; say, 2018 can come out like "twenty-eighteen", though in Afrikaans and Dutch it should be "twintig-agtien", or 30 should be "dertig". I am not sure how to get into the Espeak programme, or whether only the synth is provided, so that I can change this. How can I change this? The only dictionary I come across is the actual Window-Eyes dictionary. I find both the Afrikaans and Dutch voices equally good for my specific needs apart from this. Thanks again. Ricky Lomey |