[UAI] Postdoctoral Fellow – Explainable machine learning in medical data
Title: Postdoctoral Fellow – Explainable Machine Learning in Medical Data
Location: Downtown Toronto, ON, Canada
Start Date: Immediately
Closing Date: Until the position is filled

The Signal Processing and Oral Communication Lab, Department of Computer Science, University of Toronto; the Vector Institute for Artificial Intelligence; and the Toronto Rehab Institute – UHN

Description:
The Signal Processing and Oral Communication Lab (SPOClab) invites applications for the position of Postdoctoral Fellow to work on explainable models in the area of machine learning in medical data. The position involves activities ranging from foundational machine learning and big data analysis to natural language processing and predictive modeling, in collaboration with our multi-disciplinary team across computer science and healthcare. The duties of the successful candidate include undertaking high-quality independent and collaborative research, applying for grants, and supervising undergraduate and graduate students in the research group. The candidate will be strongly encouraged to establish connections and collaborate with other research groups in the Toronto AI community and with our research partners across Canada and abroad. The successful candidate will be supervised by Dr Frank Rudzicz.

This position is funded for a 2-year period. Salary will be based on the applicant's previous experience and education. Space will be provided at the Vector Institute. For more information, see https://vectorinstitute.ai

Requirements:
We are seeking candidates with the following qualifications:
* A PhD in Computer Science, Computer Engineering, or a related area. Applicants who have fulfilled all the requirements for the PhD award may also apply.
* Advanced knowledge of machine learning, deep learning, big data, natural language processing, and predictive analytics. Experience in the healthcare sector, or in healthcare or biomedical applications, is preferred.
* Strong programming skills in Python, scikit-learn, TensorFlow, Keras, or related frameworks.
* An excellent publication record in top-quality journals and conferences.
* Excellent communication skills and demonstrated strong leadership.

The Department of Computer Science hires based on merit and is committed to employment equity. All qualified persons are encouraged to apply; however, Canadian citizens and permanent residents will be given priority.

Application:
Applications will be accepted until the position is filled. To apply, please send:
1. a one-page covering letter highlighting the relevance of your skills, knowledge, and experience, and your date of availability,
2. a curriculum vitae, including a full publication list and country of citizenship,
3. a 1-page statement of your research interests, and
4. a copy of your university transcripts
to fr...@spoclab.com with the subject line “Postdoc in Machine Learning for Health”

--
Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute-UHN;
Associate professor (status), Department of Computer Science, University of Toronto;
Co-Founder and President, WinterLight Labs Incorporated;
Faculty member, Vector Institute
|| Website: http://www.cs.toronto.edu/~frank
|| Twitter: @SPOClab<https://twitter.com/SPOClab>
|| Phone (office): 416 597 3422 x7971

___ uai mailing list uai@ENGR.ORST.EDU https://secure.engr.oregonstate.edu/mailman/listinfo/uai
[UAI] COVFEFE: COre Variable Feature Extraction Feature Extractor
We’re announcing the availability of COVFEFE, the COre Variable Feature Extraction Feature Extractor, at https://github.com/SPOClab-ca/COVFEFE, under the Apache License 2.0.

COVFEFE is a fast, multi-threaded tool for running various feature extraction pipelines. A pipeline is a directed acyclic graph in which each node is a processing task that sends its output to the next node in the graph. Features include lexicosyntactic (including lexical norms and grammatical complexity), semantic (including information content), pragmatic (including topic modeling and rhetorical structure theory), and acoustic. The tool has been run in a variety of contexts, from assessing speech for signs of neurodegeneration to analyzing quarterly business reports.

--
Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute-UHN;
Associate professor (status), Department of Computer Science, University of Toronto;
Co-Founder and President, WinterLight Labs Incorporated;
Faculty member, Vector Institute
|| Website: http://www.cs.toronto.edu/~frank
|| Twitter: @SPOClab<https://twitter.com/SPOClab>
|| Phone (office): 416 597 3422 x7971
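The pipeline idea described above, a directed acyclic graph of processing tasks whose outputs feed downstream nodes, can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the general concept, not COVFEFE's actual API; the class and method names here are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

class Pipeline:
    """A toy DAG pipeline: nodes are functions, edges carry their outputs."""

    def __init__(self):
        self.funcs = {}    # node name -> callable
        self.parents = {}  # node name -> upstream node names

    def add(self, name, func, parents=()):
        self.funcs[name] = func
        self.parents[name] = list(parents)
        return self

    def run(self, source):
        """Run every node; parentless nodes receive `source` as input."""
        results = {}
        pending = set(self.funcs)
        with ThreadPoolExecutor() as pool:
            while pending:
                # Nodes whose parents have all finished may run in parallel.
                ready = [n for n in pending
                         if all(p in results for p in self.parents[n])]
                if not ready:
                    raise ValueError("cycle in pipeline")
                futures = {}
                for n in ready:
                    args = [results[p] for p in self.parents[n]] or [source]
                    futures[n] = pool.submit(self.funcs[n], *args)
                for n, f in futures.items():
                    results[n] = f.result()
                    pending.discard(n)
        return results
```

A short usage sketch: a tokenizer node fans out to two feature nodes that run concurrently once the tokens are available.

```python
p = (Pipeline()
     .add("tokens", lambda text: text.lower().split())
     .add("n_words", len, parents=["tokens"])
     .add("vocab", lambda toks: len(set(toks)), parents=["tokens"]))
out = p.run("The cat saw the cat")  # out["n_words"] == 5, out["vocab"] == 3
```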
[UAI] First call for papers: Canadian AI 2019
Call for Papers and Submissions

AI 2019, the 32nd Canadian Conference on Artificial Intelligence<https://www.caiac.ca/en/conferences/canadianai-2019/home>, invites papers that present original work in all areas of Artificial Intelligence, either theoretical or applied. As in previous years, we aim for accepted papers to be published by Springer in the Lecture Notes in AI<https://www.springer.com/series/1244> series.

Topics of interest include, but are not limited to:
· Agent Systems
· AI Applications
· Automated Reasoning
· Bioinformatics and BioNLP
· Case-based Reasoning
· Cognitive Models
· Constraint Satisfaction
· Data Mining
· E-Commerce
· Evolutionary Computation
· Games
· Information Retrieval and Search
· Information and Knowledge Management
· Knowledge Representation
· Machine Learning
· Multimedia Processing
· Natural Language Processing
· Neural Nets and Deep Learning
· Planning
· Privacy-preserving Methods
· Robotics
· Uncertainty
· User Modeling
· Web Mining and Applications

We also welcome the submission of position papers, which present evidence-based arguments for a particular point of view without necessarily presenting a new system. There will be an option during the submission process to indicate that a paper is a position paper.

Important dates
Submission deadline: January 21st, 2019
Author notification: February 25th, 2019
Final papers due: March 10th, 2019

Submission details
Submitted papers must be no longer than 12 pages, including references, and must be formatted using the Springer LNCS/LNAI style<http://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines>. We provide a sample of its use here<https://www.caiac.ca/sites/default/files/basic_attachments/LNCS-sample.zip>, but we encourage the use of the most up-to-date LaTeX2e style file available from Springer. Papers submitted to the conference must not have already been published, been accepted for publication, or be under review by a journal or another conference.

Submissions will go through a double-blind review process by Program Committee members, who will assess originality, significance, technical merit, and clarity of presentation. As such, submissions must be anonymized; papers that fail to do so will be rejected without review. A "Best Paper Award" and a “Best Student Paper Award” will be given at the conference to the authors of the best paper and the best student paper, respectively, as judged by the Best Paper Award Selection Committee.

Submissions are accepted via EasyChair using the following link: https://easychair.org/conferences/?conf=canai2019

Regards,
Marie-Jean Meurs (Université du Québec à Montréal) and Frank Rudzicz (University of Toronto), Co-Chairs

--
Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute-UHN;
Associate professor (status), Department of Computer Science, University of Toronto;
Co-Founder and President, WinterLight Labs Incorporated;
Faculty member, Vector Institute
|| Website: http://www.cs.toronto.edu/~frank
|| Twitter: @SPOClab<https://twitter.com/SPOClab>
|| Phone (office): 416 597 3422 x7971
[UAI] Canadian AI 2019 - Extended deadline -- 28 January 2019
Canadian AI 2019 – Extended deadline: 28 January 2019

The 32nd Canadian Conference on Artificial Intelligence<https://www.caiac.ca/en/conferences/canadianai-2019/home> (AI 2019) will take place in Kingston, Ontario at Queen’s University, May 28 to May 31, 2019. AI 2019 invites papers that present original work in all areas of Artificial Intelligence, either theoretical or applied. As in previous years, we aim for accepted papers to be published by Springer in the Lecture Notes in AI series.

NEWS:
* Special Track on AI and Society
* Keynote Speakers: Prof. Maite Taboada<http://www.sfu.ca/~mtaboada/> and TBA
* Tutorials on Deep Learning and NLP by Prof. Xiaodan Zhu<http://www.xiaodanzhu.com/> and on Adversarial Agents by Prof. Graham Taylor<https://vectorinstitute.ai/team/graham-taylor/>
* Graduate Student Symposium submission deadline: 18 February 2019

Topics of interest include, but are not limited to:
· Agent Systems
· AI Applications
· Automated Reasoning
· Bioinformatics and BioNLP
· Case-based Reasoning
· Cognitive Models
· Constraint Satisfaction
· Data Mining
· E-Commerce
· Evolutionary Computation
· Games
· Information Retrieval and Search
· Information and Knowledge Management
· Knowledge Representation
· Machine Learning
· Multimedia Processing
· Natural Language Processing
· Neural Nets and Deep Learning
· Planning
· Privacy-preserving Methods
· Robotics
· Uncertainty
· User Modeling
· Web Mining and Applications

We also welcome the submission of position papers, which present evidence-based arguments for a particular point of view without necessarily presenting a new system. There will be an option during the submission process to indicate that a paper is a position paper.

*Special Track on AI and Society*
AI is today a field of research whose impact reaches well beyond technology, promising the creation of new services, highly personalized products, task automation, and more. While the prospects for innovation are vast, they raise ethical and social concerns, from privacy, security, and accountability to transparency and social appropriation. In this context, a CAI2019 Special Track will welcome multi-disciplinary research papers exploring these challenges. Topics of interest for this special track include but are not limited to:
. Multidisciplinary research involving AI
. Transparency and accountability of learning algorithms
. Ethical issues in the development of AI
. The future of work, automation, and AI
. AI, access to justice, and human rights
. Geopolitics and the new global political economy of AI
. Impacts and contributions of AI to social innovation
. Effects of AI on living environments and territories
. Development of artificial moral agents
. Transformations of analysis and artistic creation with AI
. Integration of AI into teaching practices
. Social relationships mediated by AI

Important dates
Submission deadline: January 28th, 2019 ==> NEW DEADLINE! <==
Author notification: February 25th, 2019
Final papers due: March 10th, 2019

Submission details
Submitted papers must be no longer than 12 pages, including references, and must be formatted using the Springer LNCS/LNAI style. We provide a sample of its use here, but we encourage the use of the most up-to-date LaTeX2e style file available from Springer. Papers submitted to the conference must not have already been published, been accepted for publication, or be under review by a journal or another conference.

Submissions will go through a double-blind review process by Program Committee members, who will assess originality, significance, technical merit, and clarity of presentation. As such, submissions must be anonymized; papers that fail to do so will be rejected without review. A "Best Paper Award" and a “Best Student Paper Award” will be given at the conference to the authors of the best paper and the best student paper, respectively, as judged by the Best Paper Award Selection Committee.

Submissions are accepted via EasyChair using the following link: https://easychair.org/conferences/?conf=canai2019

Program co-chairs
Marie-Jean Meurs, Université du Québec à Montréal (UQAM)
Frank Rudzicz, University of Toronto

Publication
The conference proceedings will be published by Springer in the Lecture Notes in Artificial Intelligence (LNCS/LNAI). A paper will be accepted either as a long or as a short paper. Long papers will be allocated 12 pages, while short papers will be allocated 6 pages in the proceedings. Authors of accepted papers will be allocated time for an oral presentation at the conference and will have the opportunity to present their work in a poster session. At least one author of each accepted paper is required to attend the conference to present the work; authors must agree to this requirement prior to submitting their paper for review. Expanded versions of selected papers representing mature work will be invited to a special journal issue of Computational Intelligence.
[UAI] Michael J Fox postdoctoral fellowship in NLP for health
Title: Michael J Fox Postdoctoral Fellow – Machine Learning in Medical Data
Location: Downtown Toronto, ON, Canada
Start Date: Immediately
Closing Date: Until the position is filled

Department of Computer Science, University of Toronto; the Vector Institute for Artificial Intelligence; and the Li Ka Shing Knowledge Institute at St Michael’s Hospital

Description:
We are seeking a skilled postdoctoral fellow whose expertise intersects machine learning and natural language processing, especially with speech. The candidate is expected to make novel contributions to these disciplines in the context of healthcare. Specifically, this will include clinical speech recognition and predictive analytics using modern neural networks, with data from a variety of neurodegenerative conditions including Alzheimer’s disease, Parkinson’s disease, and dementia. The successful candidate will be designated as a Michael J Fox Postdoctoral Fellow.

The duties of the successful candidate include undertaking high-quality independent and collaborative research and supervising undergraduate and graduate students in the research group. The candidate will be strongly encouraged to establish connections and collaborate with other research groups in the Toronto AI community and with our research partners across Canada and abroad. The successful candidate will be supervised by Dr Frank Rudzicz. This position is funded for a 2-year period.

Requirements:
We are seeking candidates with the following qualifications:
* A PhD in Computer Science, Computer Engineering, or a related area. Applicants who have fulfilled all the requirements for the PhD award may also apply.
* Advanced knowledge of machine learning, deep learning, big data, natural language processing, and predictive analytics. Experience in the healthcare sector, or in healthcare or biomedical applications, is preferred.
* Strong programming skills in Python, scikit-learn, TensorFlow, PyTorch, or related frameworks.
* An excellent publication record in top-quality journals and conferences.
* Excellent communication skills and demonstrated strong leadership.

The Department of Computer Science hires based on merit and is committed to employment equity. All qualified persons are encouraged to apply; however, Canadian citizens and permanent residents will be given priority.

Application:
Applications will be accepted until the position is filled. To apply, please send:
1. a one-page covering letter highlighting the relevance of your skills, knowledge, and experience, and your date of availability,
2. a curriculum vitae, including a full publication list and country of citizenship,
3. a 1-page statement of your research interests, and
4. a copy of your university transcripts
to fr...@spoclab.com with the subject line “Postdoc in Machine Learning for Health”

Submissions by e-mail are required. After an initial screening, selected applicants will be asked to forward three academic and/or professional letters of reference.

--
Frank Rudzicz, PhD
Scientist, International Centre for Surgical Safety, Li Ka Shing Knowledge Institute, St Michael’s Hospital;
Associate Professor, Department of Computer Science, University of Toronto;
Director of AI, Surgical Safety Technologies Inc;
Co-Founder, WinterLight Labs Inc;
Faculty Member, Vector Institute for Artificial Intelligence
>> http://www.cs.toronto.edu/~frank
>> @SPOClab<https://twitter.com/SPOClab>
[UAI] Call for Papers: Special Issue on Speech & Dementia in Computer Speech and Language
This is a call for papers for a Special Issue on Speech & Dementia – Automatic Screening for Dementia from Spoken Communication, to be published in early 2020 in Computer Speech and Language, an official publication of the International Speech Communication Association.

Dementia is an incurable progressive disease that ranks first among the age-related fears of people aged 60+ years and affects about 50 million people worldwide, a number that is estimated to double every 20 years. In 2018, costs exceeded the $1 trillion USD mark, with 90% incurred in high-income countries. While no preventive measures or curative therapeutic interventions for dementia are yet known, studies show that early interventions can delay the progression of the disease. Thus, it is pivotal to recognize symptoms as early as possible. Unfortunately, current diagnostic procedures require a thorough examination by medical specialists, which is too cost- and time-consuming to be provided frequently on a large scale.

Spoken language skills are well-established early indicators of cognitive abilities. Since speech is the most important means of communication used on a daily basis, monitoring of relevant indicators offers great potential for easy-to-use casual testing. Recently, assessment systems based on automatic speech processing methods have been developed that automatically extract relevant acoustic and linguistic features from spoken conversations in order to interpret signs of cognitive decline and thus support clinicians in the diagnosis of dementia. Such systems could improve current diagnostic practice by providing easy-to-use, low-cost means of detecting and tracking early signs of dementia, which currently cannot be offered due to cost, time, and lack of human resources.
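To make the idea of "linguistic features from spoken conversations" concrete, here is a toy sketch of a few simple lexical markers of the kind such screening studies often compute from interview transcripts. This is a hypothetical illustration only, not any published system's feature set; the function name and the choice of markers are the author's assumptions for the example.

```python
import re
import statistics

def lexical_features(transcript):
    """Compute a few toy lexical markers from a transcript string."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return {}
    return {
        # vocabulary richness; often reported to decline with dementia
        "type_token_ratio": len(set(words)) / len(words),
        # mean word length, a rough proxy for lexical sophistication
        "mean_word_length": statistics.mean(len(w) for w in words),
        # rate of filled pauses, a crude fluency marker
        "filler_rate": sum(w in {"um", "uh", "er"} for w in words) / len(words),
    }

feats = lexical_features("um the boy is uh taking the cookie")
# 8 tokens, 7 distinct types, 2 fillers:
# type_token_ratio == 0.875, filler_rate == 0.25
```

Real systems combine many such features (acoustic as well as linguistic) and feed them to a classifier; this sketch only shows the feature-extraction step.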
The special issue on Speech and Dementia will bring together researchers from the fields of speech and language processing, medicine, psychology, and disciplines related to health and aging, and thus will contribute to the advancement of cross-disciplinary speech and language research.

Topics of interest include (but are not limited to):
* Speech or language resources for detection and tracking of dementia
* Speech- and language-related features for cognitive assessment (e.g., MCI, dementia)
* Detection of early signs of dementia from speech and language data
* Longitudinal tracking of dementia
* User evaluation and field trials of dementia detection
* Methods, algorithms, and tools for detection and tracking of dementia
* Spoken communication systems for monitoring, assisting, or activating people with dementia

For any questions regarding submission of papers related to the overall scope of Speech and Dementia but outside the topics specified above, please do not hesitate to contact the Editors.

Submission procedure:
Prospective authors should prepare manuscripts according to the information available at https://www.elsevier.com/journals/computer-speech-and-language/0885-2308/guide-for-authors and submit electronically through the online CSL submission system. When selecting a manuscript type, authors must click on 'Special Issue on Speech and Dementia'.
Important Dates:
Manuscript submission: 15 Jun 2019
Final decision: 31 Mar 2020
Expected publication date: June 2020

Editors:
Tanja Schultz (University Bremen, DEU) tanja.schu...@uni-bremen.de
Heidi Christensen (University of Sheffield, GBR) heidi.christen...@sheffield.ac.uk
Frank Rudzicz (University of Toronto, CAN) fr...@spoclab.com
Johannes Schroder (University Heidelberg, DEU) johannes.schroe...@med.uni-heidelberg.de

Webpage: http://csl.uni-bremen.de/SI-SpeechAndDementia

--
Frank Rudzicz, PhD
Scientist, International Centre for Surgical Safety, Li Ka Shing Knowledge Institute, St Michael’s Hospital;
Associate Professor, Department of Computer Science, University of Toronto;
Director of AI, Surgical Safety Technologies Inc;
Co-Founder, WinterLight Labs Inc;
Faculty Member, Vector Institute for Artificial Intelligence;
Inaugural CIFAR Chair in AI
>> http://www.cs.toronto.edu/~frank
>> @SPOClab<https://twitter.com/SPOClab>
[UAI] NAACL-HLT 2010: Student Research Workshop Call for Papers
NAACL HLT 2010 Student Research Workshop Call for Papers

1. General Invitation for Submissions

The Student Research Workshop (SRW) is an established tradition at ACL conferences. The workshop provides a venue for student researchers investigating topics in computational linguistics and natural language processing to present their work and receive feedback from a general audience as well as from panelists. The panelists are experienced researchers who will prepare in-depth comments and questions in advance of the presentation.

We invite student researchers to submit their work to this workshop. Since the SRW is an excellent testing ground for your ideas before an international audience of experts, the emphasis of the workshop will be on work in progress. Admissible research can derive from any topic area within computational linguistics and can be applicable to either speech or text. A list of topic areas is provided in the Call for Papers for the NAACL HLT 2010 Conference and is available at: http://naaclhlt2010.isi.edu/cfp.html

2. Submission Requirements

The emphasis of the workshop is on original and unpublished research; papers should describe original work in progress. Students who have decided on their thesis direction but still have significant research left to do are particularly encouraged to submit papers. Since the main purpose of presenting at the workshop is to exchange ideas with other researchers and to receive helpful feedback for further development of the work, papers should clearly indicate directions for future research wherever appropriate.

All authors of multi-author papers MUST be students. Papers submitted for this workshop are eligible only if they have not been presented at any other meeting with publicly available published proceedings. Students who have already presented at an ACL/EACL/NAACL Student Research Workshop may not submit to this workshop; they should submit their papers to the main conference instead.

3. Submission Procedure

Submission will be electronic, using the paper submission web page below: https://www.softconf.com/naaclhlt2010/srw/

Submissions should follow the two-column format of ACL proceedings and should not exceed six (6) pages, including references. Note that papers shorter than 6 pages may also be submitted for consideration as poster presentations rather than oral presentations. We strongly recommend the use of the ACL LaTeX style files or Microsoft Word style files tailored for this year's conference. These files are available at: http://naaclhlt2010.isi.edu/authors.html A description of the format is available there in case you are unable to use these style files directly. All submissions must be electronic: please use the submission website above to submit your paper.

4. Reviewing Procedure

The review of papers submitted to the Student Research Workshop will be managed by the Student Research Workshop Co-Chairs, with the assistance of a team of reviewers. Each submission will be matched with a mixed panel of student and senior researchers for review. The final acceptance decision will be based on the results of these reviews. The review process is double-blind; therefore, please ensure that your paper shows the title but no other information that can identify the author(s). For example, rather than this: ''We showed previously (Smith, 2001), ...'', use citations such as: ''Smith (2001) previously showed ...''.

5. Schedule

Papers must be submitted no later than 11:59 PM EST, February 15, 2010. No papers received after this deadline will be accepted. Acknowledgement will be emailed soon after receipt. Notification of acceptance will be sent to authors by email on March 15, 2010. Detailed formatting guidelines for the preparation of the final camera-ready copy will be provided to authors with their acceptance notice.

Important Dates:
* Papers due: February 15, 2010
* Notification of acceptance: March 15, 2010
* Camera-ready papers due: April 12, 2010
* Conference dates: June 1-6, 2010

The Student Research Workshop will be held during the NAACL HLT 2010 conference.

6. Contact Information

If you need to contact the co-chairs of the Student Workshop, please use: naacl-hlt2010...@cs.uiuc.edu An e-mail sent to this address will be forwarded to all co-chairs.

Julia Hockenmaier (Faculty Advisor), University of Illinois, Urbana, Illinois, USA
Diane Litman (Faculty Advisor), University of Pittsburgh, Pittsburgh, Pennsylvania, USA
Adriane Boyd (NLP Co-Chair), Ohio State University, Columbus, Ohio, USA
Mahesh Joshi (NLP Co-Chair), Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Frank Rudzicz (Speech Co-Chair), University of Toronto, Toronto, Ontario, Canada

___ naacl-hlt2010srw mailing list naacl-hlt2010...@cs.uiuc.edu http://lists.cs.uiuc.edu/mailman/listinfo/naacl-hlt2010srw
[UAI] Open position: Postdoctoral Fellow in Machine Learning and Computational Linguistics
Postdoctoral Fellow in Machine Learning and Computational Linguistics

We are seeking a skilled postdoctoral fellow whose expertise intersects machine learning and computational linguistics. The candidate is expected to make novel contributions to these disciplines in the context of healthcare. The domain of the research is largely open-ended: it may include textual processing of the medical record, speech recognition with atypical or pathological voices, and human-computer dialogue using modern recurrent neural networks, especially with situated robots. Work can commence as soon as August 2017. The initial contract is for 1 year, although extension is possible.

The successful applicant will have:
1) A doctoral degree in computer science, electrical engineering, biomedical engineering, or a relevant discipline;
2) Evidence of impact in research through a strong publication record in relevant venues;
3) Evidence of strong collaborative skills, including possible supervision of junior researchers or students, or equivalent industrial experience;
4) Excellent interpersonal, written, and oral communication skills;
5) A strong technical background in machine learning, natural language processing, and speech recognition. Experience in human-computer interaction is an asset. Experience with clinical populations is preferred.

This work will be conducted at the Toronto Rehabilitation Institute and at the University of Toronto.

--
Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute;
Assistant professor, Department of Computer Science, University of Toronto;
Co-Founder and President, WinterLight Labs Inc.;
Director, SPOClab (signal processing and oral communications)
|| Website: http://www.cs.toronto.edu/~frank
|| Phone (office): 416 597 3422 x7971
|| Fax: 416 597 3031
[UAI] Postdoctoral Fellow position in Machine Learning in Healthcare
We are seeking a skilled postdoctoral fellow whose expertise intersects machine learning and text analytics. The candidate is expected to make novel contributions to these disciplines in the context of healthcare. Specifically, this will include textual processing of the medical record, clinical speech recognition, and predictive analytics using modern neural networks. This work will be conducted at St Michael’s Hospital, the University Health Network, and the Vector Institute in Toronto, Canada, and can commence as soon as January 2018. The initial contract is for 1 year, although extension is possible.

The successful applicant will have:
1) A doctoral degree in computer science, electrical engineering, biomedical engineering, or a relevant discipline;
2) Evidence of impact in research through a strong publication record in relevant venues;
3) Demonstrated programming ability in a research context, or equivalent industrial experience;
4) Excellent interpersonal, written, and oral communication skills;
5) A strong technical background in machine learning, natural language processing, or speech recognition. Experience in clinical settings is preferred.

Please send your CV, a cover letter, and a sample of your writing to Frank Rudzicz at fr...@cs.toronto.edu by 1 December 2017.
[UAI] Two new professorships at Sheridan College in Intelligent Virtual Agents/HCI/Machine Learning
applied aspects of knowledge to a broad range of students and other audiences
* Committed to excellence in research, teaching, and learning, and to working within a team environment

Please apply online: https://careers-sheridancollege.icims.com<https://careers-sheridancollege.icims.com/>

Sheridan welcomes diversity in the workplace and encourages applications from all qualified individuals, including visible minorities, Indigenous People, and persons with disabilities. In accordance with the Accessibility for Ontarians with Disabilities Act (AODA), Sheridan is committed to accommodating applicants with disabilities throughout the hiring process. At any stage of the hiring process, Human Resources will work with applicants requesting accommodation.

Note: Copies of educational credentials are requested at the time of an interview. As a condition of employment, Sheridan requires confirmation of educational credentials in the form of an official Canadian transcript or an official evaluation of international credentials which determines Canadian equivalency.

Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute;
Assistant professor (status only), Department of Computer Science, University of Toronto;
Co-Founder and President, WinterLight Labs Incorporated;
Faculty member, Vector Institute;
Director, SPOClab (signal processing and oral communications)
|| Website: http://www.cs.toronto.edu/~frank
|| Twitter: @SPOClab<https://twitter.com/SPOClab>
|| Phone (office): 416 597 3422 x7971
|| Fax: 416 597 3031
[UAI] First CFP - 4th annual workshop on Speech and Language Processing for Assistive Technologies
We are pleased to announce the first call for papers for the fourth Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), to be co-located with Interspeech 2013 in Grenoble in August 2013. The deadlines for submission of papers and demo proposals are 17 May and 31 May, respectively. Full details on the workshop, topics of interest, timeline, and formatting of regular papers are here: http://slpat.org/slpat2013

This 2-day workshop will bring together researchers from all areas of speech and language technology with a common interest in making everyday life more accessible for people with physical, cognitive, sensory, emotional, or developmental disabilities. The workshop will provide an opportunity for individuals from both research communities, and the individuals with whom they are working, to share research findings and to discuss present and future challenges and the potential for collaboration and progress.

General topics include but are not limited to:
. Automated processing of sign language
. Speech synthesis and speech recognition for physical or cognitive impairments
. Speech transformation for improved intelligibility
. Speech and language technologies for assisted living
. Translation systems: to and from speech, text, symbols, and sign language
. Novel modeling and machine learning approaches for AAC/AT applications
. Text processing for improved comprehension, e.g., sentence simplification or text-to-speech
. Silent speech: speech technology based on sensors without audio
. Symbol languages, sign languages, nonverbal communication
. Dialogue systems and natural language generation for assistive technologies
. Multimodal user interfaces and dialogue systems adapted to assistive technologies
. NLP for cognitive assistance applications
. Presentation of graphical information for people with visual impairments
. Speech and NLP applied to typing interface applications
. Brain-computer interfaces for language processing applications
. Speech, natural language, and multimodal interfaces to assistive technologies
. Assessment of speech and language processing within the context of assistive technology
. Web accessibility: text simplification, summarization, and adapted presentation modes such as speech, signs, or symbols
. Deployment of speech and NLP tools in the clinic or in the field
. Linguistic resources: corpora and annotation schemes
. Evaluation of systems and components, including methodology
. Anything included in this year's special topic
. Other topics in Augmentative and Alternative Communication

This year we are introducing a special topic: Smart Homes and ambient intelligent technology applied to augmentative communication. Relevant research topics include (but are not limited to):
. Automatic speech recognition in multi-source environments
. Distant speech recognition
. Understanding, modelling, or recognition of aged speech
. Speech analysis in the case of elderly people with impairments; early recognition of speech capability loss
. Assistive speech technology
. Multimodal speech recognition (context-aware ASR)
. Multimodal emotion recognition
. Audio scene and smart home context analysis
. Applications of speech technology (ASR, dialogue, synthesis) for ambient assisted living

Please contact the conference organizers at slpat2013.works...@gmail.com with any questions.

Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute;
Assistant professor, Department of Computer Science, University of Toronto;
Founder and Chief Science Officer, Thotra Incorporated
>> <http://www.cs.toronto.edu/~frank> http://www.cs.toronto.edu/~frank (personal)
>> <http://spoclab.ca/> http://spoclab.ca (lab)
[UAI] 2nd CFP, SLPAT13: 4th annual workshop on Speech and Language Processing for Assistive Technologies
--== SLPAT13 ==--
The 4th annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT)
21 and 22 August 2013, Grenoble, France (satellite event of Interspeech 2013).

==> Submission deadlines: 17 May (research papers) and 31 May (demo proposals) <==
Full details: http://slpat.org/slpat2013
Contact: slpat2013.works...@gmail.com

Colleagues,

We invite you to join us in Grenoble for the 4th annual workshop on Speech and Language Processing for Assistive Technologies. This 2-day workshop will bring together research in speech and language technology that assists people with physical, cognitive, sensory, emotional, or developmental disabilities. This year we are introducing a special topic -- Smart Homes and ambient intelligent technology applied to augmentative communication. The program committee is now online at http://www.slpat.org/slpat2013/people.html.

We are also happy to announce that we are now a special interest group of both the Association for Computational Linguistics and the International Speech Communication Association. We look forward to being a part of both communities.

General topics of SLPAT13 include but are not limited to:
- Automated processing of sign language
- Speech synthesis and speech recognition for physical or cognitive impairments
- Speech transformation for improved intelligibility
- Speech and Language Technologies for Assisted Living
- Translation systems; to and from speech, text, symbols and sign language
- Novel modeling and machine learning approaches for AAC/AT applications
- Text processing for improved comprehension, e.g., sentence simplification or text-to-speech
- Silent speech: speech technology based on sensors without audio
- Symbol languages, sign languages, nonverbal communication
- Dialogue systems and natural language generation for assistive technologies
- Multimodal user interfaces and dialogue systems adapted to assistive technologies
- NLP for cognitive assistance applications
- Presentation of graphical information for people with visual impairments
- Speech and NLP applied to typing interface applications
- Brain-computer interfaces for language processing applications
- Speech, natural language and multimodal interfaces to assistive technologies
- Assessment of speech and language processing within the context of assistive technology
- Web accessibility; text simplification, summarization, and adapted presentation modes such as speech, signs or symbols
- Deployment of speech and NLP tools in the clinic or in the field
- Linguistic resources; corpora and annotation schemes
- Evaluation of systems and components, including methodology
- Anything included in this year's special topic
- Other topics in Augmentative and Alternative Communication

The special topic this year is smart homes and intelligent companions. Subtopics include:
- Automatic speech recognition in distant or multi-source environments
- Understanding, modelling or recognition of aged speech
- Speech analysis in the case of elderly with impairments, early recognition of speech capability loss
- Multimodal speech recognition (context-aware ASR)
- Multimodal emotion recognition
- Applications of speech technology (ASR, dialogue, synthesis) for ambient assisted living

This year, SLPAT will be co-located with the 1st Workshop on Affective Social Speech Signals (WASSS, http://wasss-2013.imag.fr/), which takes place on 22 and 23 August 2013. Participation in and submission to both workshops will be facilitated by reduced registration fees for double registration (rather than registering for both individually), co-ordination of topics on the overlapping day (22 August) to enable participation in both, and common lunches and events combining the two communities.

We look forward to your submissions!

Regards,
Organizing Committee, SLPAT13

Frank Rudzicz, PhD.
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto; Founder and Chief Science Officer, Thotra Incorporated
>> http://www.cs.toronto.edu/~frank (personal)
>> http://spoclab.ca (lab)
[UAI] SLPAT 2013, third call for papers
SLPAT 2013, 3rd call for papers
The 4th annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT).
21 and 22 August 2013, Grenoble, France (satellite event of Interspeech 2013).

==> Submission deadlines: 27 May (research papers) and 3 June (demo proposals) <==
Full details: http://slpat.org/slpat2013
Contact: slpat2013.works...@gmail.com

Colleagues,

We invite you to join us in Grenoble for the 4th annual workshop on Speech and Language Processing for Assistive Technologies. This 2-day workshop will bring together research in speech and language technology that assists people with physical, cognitive, sensory, emotional, or developmental disabilities. This year we are introducing a special topic -- Smart Homes and ambient intelligent technology applied to augmentative communication. The program committee is now online at http://www.slpat.org/slpat2013/people.html.

It is our pleasure to announce that Professor Mark Hawley will be delivering an Invited Lecture on the first day of the workshop. Mark Hawley is Professor of Health Services Research at the University of Sheffield, UK, where he leads the Rehabilitation and Assistive Technology Research Group. He is also Honorary Consultant Clinical Scientist at Barnsley Hospital, where he is Head of Medical Physics and Clinical Engineering. Over the last 20 years, he has worked as a clinician and researcher -- providing, researching, developing and evaluating assistive technology, telehealth and telecare products and services for disabled people, older people, and people with long-term conditions.

This year's workshop will include a tour of a smart home at the Laboratory of Informatics of Grenoble. More details will become available on the SLPAT 2013 website.
General topics of SLPAT 2013 include but are not limited to:
- Automated processing of sign language
- Speech synthesis and speech recognition for physical or cognitive impairments
- Speech transformation for improved intelligibility
- Speech and Language Technologies for Assisted Living
- Translation systems; to and from speech, text, symbols and sign language
- Novel modeling and machine learning approaches for AAC/AT applications
- Text processing for improved comprehension, e.g., sentence simplification or text-to-speech
- Silent speech: speech technology based on sensors without audio
- Symbol languages, sign languages, nonverbal communication
- Dialogue systems and natural language generation for assistive technologies
- Multimodal user interfaces and dialogue systems adapted to assistive technologies
- NLP for cognitive assistance applications
- Presentation of graphical information for people with visual impairments
- Speech and NLP applied to typing interface applications
- Brain-computer interfaces for language processing applications
- Speech, natural language and multimodal interfaces to assistive technologies
- Assessment of speech and language processing within the context of assistive technology
- Web accessibility; text simplification, summarization, and adapted presentation modes such as speech, signs or symbols
- Deployment of speech and NLP tools in the clinic or in the field
- Linguistic resources; corpora and annotation schemes
- Evaluation of systems and components, including methodology
- Anything included in this year's special topic
- Other topics in Augmentative and Alternative Communication

The special topic this year is smart homes and intelligent companions. Subtopics include:
- Automatic speech recognition in distant or multi-source environments
- Understanding, modelling or recognition of aged speech
- Speech analysis in the case of elderly with impairments, early recognition of speech capability loss
- Multimodal speech recognition (context-aware ASR)
- Multimodal emotion recognition
- Applications of speech technology (ASR, dialogue, synthesis) for ambient assisted living

This year, SLPAT will be co-located with the 1st Workshop on Affective Social Speech Signals (WASSS, http://wasss-2013.imag.fr/), which takes place on 22 and 23 August 2013. Participation in and submission to both workshops will be facilitated by reduced registration fees for double registration (rather than registering for both individually), co-ordination of topics on the overlapping day (22 August) to enable participation in both, and common lunches and events combining the two communities.

We look forward to your submissions!

Regards,
Organizing Committee, SLPAT 2013
[UAI] Postdoctoral fellowship in speech communication and human-robot interaction for assistive technology
We are seeking a skilled postdoctoral fellow (PDF) whose expertise intersects automatic speech recognition (ASR) and human-robot interaction (HRI). The PDF will work with a team of internationally recognized researchers to create an automated speech-based dialogue system between computers and robotic systems, and individuals with dementia and other cognitive impairments. These systems will automatically adapt the vocabularies, language models, and acoustic models of the component ASR to data collected from individuals with Alzheimer's disease. Moreover, this system will analyze the linguistic and acoustic features of a user's voice to infer the user's cognitive and linguistic abilities, and emotional state. These abilities and mental states will in turn be used to adapt a speech output system to be better tuned to the user.

Work will involve programming, data analysis, dissemination of results (e.g., papers and conferences), and partial supervision of graduate and undergraduate students. Some data collection may also be involved.

The successful applicant will have:
1) A doctoral degree in computer science, electrical engineering, biomedical engineering, or a related discipline;
2) Evidence of impact in research through a strong publication record in relevant venues;
3) Evidence of strong collaborative skills, including possibly supervision of junior researchers, students, or equivalent industrial experience;
4) Excellent interpersonal, written, and oral communication skills;
5) A strong technical background in machine learning, natural language processing, robotics, and human-computer interaction.

This work will be conducted at the Toronto Rehabilitation Institute, which is affiliated with the University of Toronto.
--== About the Toronto Rehabilitation Institute ==--
One of North America's leading rehabilitation sciences centres, Toronto Rehabilitation Institute (TRI) is revolutionizing rehabilitation by helping people overcome the challenges of disabling injury, illness, or age-related health conditions to live active, healthier, more independent lives. It integrates innovative patient care, ground-breaking research, and diverse education to build healthier communities and advance the role of rehabilitation in the health system. TRI, along with Toronto Western, Toronto General, and Princess Margaret Hospitals, is a member of the University Health Network and is affiliated with the University of Toronto.

If interested, please send a brief (1-2 page) statement of purpose, an up-to-date resume, and contact information for 3 references to Alex Mihailidis (alex.milhaili...@utoronto.ca) and Frank Rudzicz (fr...@cs.toronto.edu) by 31 July 2013. The position will remain open until filled.

Frank Rudzicz, PhD.
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto; Founder and Chief Science Officer, Thotra Incorporated
>> http://www.cs.toronto.edu/~frank (personal)
>> http://spoclab.ca (lab)
[UAI] SLPAT 2013, FINAL call for papers
SLPAT 2013, FINAL call for papers
The 4th annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT).
21 and 22 August 2013, Grenoble, France (satellite event of Interspeech 2013).

==> Submission deadlines: 27 May (research papers) and 3 June (demo proposals) <==
Full details: http://slpat.org/slpat2013
Contact: slpat2013.works...@gmail.com

Colleagues,

We invite you to join us in Grenoble for the 4th annual workshop on Speech and Language Processing for Assistive Technologies. This 2-day workshop will bring together research in speech and language technology that assists people with physical, cognitive, sensory, emotional, or developmental disabilities. This year we are introducing a special topic -- Smart Homes and ambient intelligent technology applied to augmentative communication. The program committee is now online at http://www.slpat.org/slpat2013/people.html.

It is our pleasure to announce that Professor Mark Hawley will be delivering an Invited Lecture on the first day of the workshop. Mark Hawley is Professor of Health Services Research at the University of Sheffield, UK, where he leads the Rehabilitation and Assistive Technology Research Group. He is also Honorary Consultant Clinical Scientist at Barnsley Hospital, where he is Head of Medical Physics and Clinical Engineering. Over the last 20 years, he has worked as a clinician and researcher -- providing, researching, developing and evaluating assistive technology, telehealth and telecare products and services for disabled people, older people, and people with long-term conditions.

This year's workshop will include a tour of a smart home at the Laboratoire d'Informatique de Grenoble. This smart home (called "DOMUS", http://domus.imag.fr/) is an environment for researchers working on smart spaces and ambient intelligence. DOMUS is a 40 sq. metre apartment composed of typical rooms (e.g., office, bedroom, bathroom, and kitchen with dining area) and furnishings.
The entire apartment is fitted with sensors and actuators and is controlled by a home automation system.

General topics of SLPAT 2013 include but are not limited to:
- Automated processing of sign language
- Speech synthesis and speech recognition for physical or cognitive impairments
- Speech transformation for improved intelligibility
- Speech and Language Technologies for Assisted Living
- Translation systems; to and from speech, text, symbols and sign language
- Novel modeling and machine learning approaches for AAC/AT applications
- Text processing for improved comprehension, e.g., sentence simplification or text-to-speech
- Silent speech: speech technology based on sensors without audio
- Symbol languages, sign languages, nonverbal communication
- Dialogue systems and natural language generation for assistive technologies
- Multimodal user interfaces and dialogue systems adapted to assistive technologies
- NLP for cognitive assistance applications
- Presentation of graphical information for people with visual impairments
- Speech and NLP applied to typing interface applications
- Brain-computer interfaces for language processing applications
- Speech, natural language and multimodal interfaces to assistive technologies
- Assessment of speech and language processing within the context of assistive technology
- Web accessibility; text simplification, summarization, and adapted presentation modes such as speech, signs or symbols
- Deployment of speech and NLP tools in the clinic or in the field
- Linguistic resources; corpora and annotation schemes
- Evaluation of systems and components, including methodology
- Anything included in this year's special topic
- Other topics in Augmentative and Alternative Communication

The special topic this year is smart homes and intelligent companions. Subtopics include:
- Automatic speech recognition in distant or multi-source environments
- Understanding, modelling or recognition of aged speech
- Speech analysis in the case of elderly with impairments, early recognition of speech capability loss
- Multimodal speech recognition (context-aware ASR)
- Multimodal emotion recognition
- Applications of speech technology (ASR, dialogue, synthesis) for ambient assisted living

This year, SLPAT will be co-located with the 1st Workshop on Affective Social Speech Signals (WASSS, http://wasss-2013.imag.fr/), which takes place on 22 and 23 August 2013. Participation in and submission to both workshops will be facilitated by reduced registration fees for double registration (rather than registering for both individually), co-ordination of topics on the overlapping day (22 August) to enable participation in both, and common lunches and events combining the two communities.

We look forward to your submissions!

Regards,
Organizing Committee, SLPAT 2013

Frank Rudzicz, PhD.
Scientist, Toronto Rehabilitation Institute
[UAI] SLPAT 2013 deadline extension: 3 June
==> SLPAT 2013 deadline extension: 3 June <==

The deadline for paper submissions to the SLPAT 2013 workshop on Speech and Language Processing for Assistive Technologies (http://www.slpat.org/slpat2013/) has been extended by one week to 3 June, in order to match the demo submission due date. We will also accept revisions of previously submitted papers up to that time. All other dates remain unchanged.

Frank Rudzicz, PhD.
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto; Founder and Chief Science Officer, Thotra Incorporated
>> http://www.cs.toronto.edu/~frank (personal)
>> http://spoclab.ca (lab)
[UAI] Postdoctoral fellowship in speech communication and human-robot interaction in rehabilitation
We are seeking a skilled postdoctoral fellow (PDF) whose expertise intersects automatic speech recognition (ASR) and human-robot interaction (HRI). The PDF will work with a team of internationally recognized researchers to create an automated speech-based dialogue system between computers and robotic systems, and individuals with dementia and other cognitive impairments. These systems will automatically adapt the vocabularies, language models, and acoustic models of the component ASR to data collected from individuals with Alzheimer's disease. Moreover, this system will analyze the linguistic and acoustic features of a user's voice to infer the user's cognitive and linguistic abilities, and emotional state. These abilities and mental states will in turn be used to adapt a speech output system to be better tuned to the user.

Work will involve programming, data analysis, dissemination of results (e.g., papers and conferences), and partial supervision of graduate and undergraduate students. Some data collection may also be involved.

The successful applicant will have:
1) A doctoral degree in computer science, electrical engineering, biomedical engineering, or a related discipline;
2) Evidence of impact in research through a strong publication record in relevant venues;
3) Evidence of strong collaborative skills, including possibly supervision of junior researchers, students, or equivalent industrial experience;
4) Excellent interpersonal, written, and oral communication skills;
5) A strong technical background in machine learning, natural language processing, robotics, and human-computer interaction.

This work will be conducted at the Toronto Rehabilitation Institute, which is affiliated with the University of Toronto.
--== About the Toronto Rehabilitation Institute ==--
One of North America's leading rehabilitation sciences centres, Toronto Rehabilitation Institute (TRI) is revolutionizing rehabilitation by helping people overcome the challenges of disabling injury, illness, or age-related health conditions to live active, healthier, more independent lives. It integrates innovative patient care, ground-breaking research, and diverse education to build healthier communities and advance the role of rehabilitation in the health system. TRI, along with Toronto Western, Toronto General, and Princess Margaret Hospitals, is a member of the University Health Network and is affiliated with the University of Toronto.

If interested, please send a brief (1-2 page) statement of purpose, an up-to-date resume, and contact information for 3 references to Alex Mihailidis (alex.milhaili...@utoronto.ca) and Frank Rudzicz (fr...@cs.toronto.edu) by 31 July 2013. The position will remain open until filled.

Frank Rudzicz, PhD.
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto; Founder and Chief Science Officer, Thotra Incorporated
>> http://www.cs.toronto.edu/~frank (personal)
>> http://spoclab.ca (lab)
[UAI] Postdoctoral fellowship in neuroscience for speech recognition -- Toronto
Postdoctoral fellowship in neuroscience for speech recognition

We are seeking a skilled postdoctoral fellow (PDF) whose expertise intersects automatic speech recognition (ASR) and neuroscience to develop a next-generation model of speech production. Approximately 10% of North Americans have some sort of communication disorder. It is imperative that technology is used to mitigate difficulties these individuals have in being understood. This research involves building a model of how speech is produced physically and in the brain, and translating it directly into automatic speech recognition. Specifically, we propose to build an advanced neural network that relates words and phrases across electroencephalographic (EEG) data, acoustic data, and measurements of how the important articulators in speech (e.g., the lips and tongue) move. This model of speech production will be built from data recorded with people with cerebral palsy and healthy controls.

The PDF will work with a team of internationally recognized researchers in computer science, speech-language pathology, and neuroscience. Work will involve programming, data analysis, dissemination of results (e.g., papers and conferences), and partial supervision of graduate and undergraduate students. Some data collection will also be involved.

The successful applicant will have:
1) A doctoral degree in computer science, electrical engineering, biomedical engineering, neuroscience, or a related discipline;
2) Evidence of impact in research through a strong publication record in relevant venues;
3) Evidence of strong collaborative skills, including possibly supervision of junior researchers, students, or equivalent industrial experience;
4) Excellent interpersonal, written, and oral communication skills;
5) A strong technical background in machine learning, natural language processing, robotics, and human-computer interaction.
This work will be conducted at the Toronto Rehabilitation Institute and the University of Toronto.

About the Toronto Rehabilitation Institute
One of North America's leading rehabilitation sciences centres, Toronto Rehabilitation Institute is revolutionizing rehabilitation by helping people overcome the challenges of disabling injury, illness, or age-related health conditions to live active, healthier, more independent lives. It integrates innovative patient care, ground-breaking research, and diverse education to build healthier communities and advance the role of rehabilitation in the health system. Toronto Rehab, along with Toronto Western, Toronto General, and Princess Margaret Hospitals, is a member of the University Health Network and affiliated with the University of Toronto.

Applicants should send 1) a full CV, 2) a representative sample of their work, and 3) a 1-page statement of purpose to Frank Rudzicz at fr...@cs.toronto.edu by 1 December 2013.
[UAI] CFP -- Special issue of ACM Transactions on Accessible Computing (TACCESS) On Speech and Language Interaction for Daily Assistive Technology (SLPAT)
Call for Papers - Special Issue of ACM Transactions on Accessible Computing (TACCESS) on Speech and Language Interaction for Daily Assistive Technology

Guest Editors: François Portet, Frank Rudzicz, Jan Alexandersson, Heidi Christensen

Assistive technologies (AT) allow individuals with disabilities to do things that would otherwise be difficult or impossible. Many assistive technologies involve providing universal access, such as modifications to televisions or telephones to make them accessible to those with vision or hearing impairments. An important sub-discipline within this community is Augmentative and Alternative Communication (AAC), which focuses on communication technologies for those with impairments that interfere with some aspect of human communication, including spoken or written modalities. Another important sub-discipline is Ambient Assisted Living (AAL), which facilitates independent living; these technologies break down the barriers faced by people with physical or cognitive impairments and support their relatives and caregivers. These technologies are expected to improve quality of life for users and promote independence, accessibility, learning, and social connectivity.

Speech and natural language processing (NLP) can be used in AT/AAC in a variety of ways, including improving the intelligibility of unintelligible speech and providing communicative assistance for frail individuals or those with severe motor impairments. The range of applications and technologies in AAL that can rely on speech and NLP technologies is very large, and the number of individuals actively working within these research communities is growing, as evidenced by the successful INTERSPEECH 2013 satellite workshop on Speech and Language Processing for Assistive Technologies (SLPAT).
In particular, one of the greatest challenges in AAL is to design smart spaces (e.g., at home, work, or hospital) and intelligent companions that anticipate user needs, enable users to interact with and in their daily environment, and provide ways to communicate with others. This technology can benefit visually-, physically-, speech-, or cognitively-impaired persons.

Topics of interest for submission to this special issue include (but are not limited to):
• Speech, natural language and multimodal interfaces designed for people with physical or cognitive impairments
• Applications of speech and NLP technology (automatic speech recognition, synthesis, dialogue, natural language generation) for AT applications
• Novel modeling and machine learning approaches for AT applications
• Long-term adaptation of speech/NLP-based AT systems to changes in the user
• User studies and overviews of speech/NLP technology for AT: understanding the user's needs and future speech and language based technologies
• Understanding, modeling and recognition of aged or disordered speech
• Speech analysis and diagnosis: automatic recognition and detection of speech pathologies and speech capability loss
• Speech-based distress recognition
• Automated processing of symbol languages, sign language and nonverbal communication, including translation systems
• Text and audio processing for improved comprehension and intelligibility, e.g., sentence simplification or text-to-speech
• Evaluation methodology of systems and components in the lab and in the wild
• Resources; corpora and annotation schemes
• Other topics in AAC, AAL, and AT

Submission process

Contributions must not have been previously published or be under consideration for publication elsewhere, although substantial extensions of conference or workshop papers will be considered, as long as they adhere to ACM's minimum standards regarding prior publication (http://www.acm.org/pubs/sim_submissions.html).
Studies involving experiments with real target users will be appreciated. All submissions must be prepared according to the Guide for Authors as published on the journal website at http://www.rit.edu/gccis/taccess/. Submissions should follow the journal's suggested writing format (http://www.gccis.rit.edu/taccess/authors.html) and should be submitted through Manuscript Central (http://mc.manuscriptcentral.com/taccess), indicating that the paper is intended for the Special Issue. All papers will be subject to the peer review process, and final decisions regarding publication will be based on this review.

Important dates:
◦ Full paper submission: 31 March 2014
◦ Response to authors: 30 June 2014
◦ Revised submission deadline: 31 August 2014
◦ Notification of acceptance: 31 October 2014
◦ Final manuscripts due: 30 November 2014

Frank Rudzicz, PhD.
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto
[UAI] First CFP - 5th annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT) at ACL 2014
We are pleased to announce the first call for papers for the fifth annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT), to be co-located with ACL 2014 in Baltimore in June 2014. The deadline for submission of papers and demo proposals is 21 March. Full details on the workshop, topics of interest, timeline, and formatting of regular papers can be found here: http://www.slpat.org/slpat2014

This 2-day workshop will bring together researchers from all areas of speech and language technology with a common interest in making everyday life more accessible for people with physical, cognitive, sensory, emotional, or developmental disabilities. This workshop will provide an opportunity for individuals from both research communities, and the individuals with whom they are working, to share research findings, and to discuss present and future challenges and the potential for collaboration and progress.

General topics include but are not limited to:
- Automated processing of sign language
- Speech synthesis and speech recognition for physical or cognitive impairments
- Speech transformation for improved intelligibility
- Speech and Language Technologies for Assisted Living
- Translation systems; to and from speech, text, symbols and sign language
- Novel modeling and machine learning approaches for AAC/AT applications
- Text processing for improved comprehension, e.g., sentence simplification or text-to-speech
- Silent speech: speech technology based on sensors without audio
- Symbol languages, sign languages, nonverbal communication
- Dialogue systems and natural language generation for assistive technologies
- Multimodal user interfaces and dialogue systems adapted to assistive technologies
- NLP for cognitive assistance applications
- Presentation of graphical information for people with visual impairments
- Speech and NLP applied to typing interface applications
- Brain-computer interfaces for language processing applications
- Speech, natural language and multimodal interfaces to assistive technologies
- Assessment of speech and language processing within the context of assistive technology
- Web accessibility; text simplification, summarization, and adapted presentation modes such as speech, signs or symbols
- Deployment of speech and NLP tools in the clinic or in the field
- Linguistic resources; corpora and annotation schemes
- Evaluation of systems and components, including methodology
- Anything included in this year's special topic
- Other topics in Augmentative and Alternative Communication

Please contact the conference organizers at slpat2014-works...@googlegroups.com with any questions.

Important dates:
- 21 March: Paper/demo submissions due
- 11 April: Notification of acceptance
- 28 April: Camera-ready papers due
- 26-27 June: SLPAT workshop

We look forward to seeing you!

The organizing committee of SLPAT 2014:
- Jan Alexandersson, DFKI, Germany
- Dimitra Anastasiou, University of Bremen, Germany
- Cui Jian, SFB/TR 8 Spatial Cognition, University of Bremen, Germany
- Ani Nenkova, University of Pennsylvania, USA
- Rupal Patel, Northeastern University, USA
- Frank Rudzicz, Toronto Rehabilitation Institute and University of Toronto, Canada
- Annalu Waller, University of Dundee, Scotland
- Desislava Zhekova, University of Munich, Germany

Frank Rudzicz, PhD.
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto; Founder and Chief Science Officer, Thotra Incorporated
>> http://www.cs.toronto.edu/~frank (personal)
>> http://spoclab.ca (lab)
[UAI] Second CFP - 5th annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT) at ACL 2014
Greetings,

We are pleased to announce the second call for papers for the fifth annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT), to be co-located with ACL 2014 in Baltimore in June 2014 (http://www.cs.jhu.edu/ACL2014/). The deadline for submission of papers and demo proposals is 21 March. Full details on the workshop, topics of interest, timeline, and formatting of regular papers can be found here: http://www.slpat.org/slpat2014

This 1-day workshop will bring together researchers from all areas of speech and language technology with a common interest in making everyday life more accessible for people with physical, cognitive, sensory, emotional, or developmental disabilities. The workshop will provide an opportunity for individuals from various research communities, and the individuals with whom they are working, to share research findings and to discuss present and future challenges and opportunities.

General topics include but are not limited to:
. Automated processing of sign language
. Speech synthesis and speech recognition for physical or cognitive impairments
. Speech transformation for improved intelligibility
. Speech and language technologies for assisted living
. Translation systems: to and from speech, text, symbols and sign language
. Novel modeling and machine learning approaches for AAC/AT applications
. Text processing for improved comprehension, e.g., sentence simplification or text-to-speech
. Silent speech: speech technology based on sensors without audio
. Symbol languages, sign languages, nonverbal communication
. Dialogue systems and natural language generation for assistive technologies
. Multimodal user interfaces and dialogue systems adapted to assistive technologies
. NLP for cognitive assistance applications
. Presentation of graphical information for people with visual impairments
. Speech and NLP applied to typing interface applications
. Brain-computer interfaces for language processing applications
. Speech, natural language and multimodal interfaces to assistive technologies
. Assessment of speech and language processing within the context of assistive technology
. Web accessibility: text simplification, summarization, and adapted presentation modes such as speech, signs or symbols
. Deployment of speech and NLP tools in the clinic or in the field
. Linguistic resources: corpora and annotation schemes
. Evaluation of systems and components, including methodology
. Anything included in this year's special topic
. Other topics in Augmentative and Alternative Communication

Please contact the conference organizers at slpat2014-works...@googlegroups.com with any questions.

Important dates:
21 March: Paper/demo submissions due
11 April: Notification of acceptance
28 April: Camera-ready papers due
26-27 June: SLPAT workshop

We look forward to seeing you!

The organizing committee of SLPAT 2014,
Jan Alexandersson, DFKI, Germany
Dimitra Anastasiou, University of Bremen, Germany
Cui Jian, SFB/TR 8 Spatial Cognition, University of Bremen, Germany
Ani Nenkova, University of Pennsylvania, USA
Rupal Patel, Northeastern University, USA
Frank Rudzicz, Toronto Rehabilitation Institute and University of Toronto, Canada
Annalu Waller, University of Dundee, Scotland
Desislava Zhekova, University of Munich, Germany

Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto; Founder and Chief Executive Officer, Thotra Incorporated
Website: http://www.cs.toronto.edu/~frank
Phone (office): 416 597 3422 x7971
Fax: 416 597 3031
[UAI] 2nd CFP -- Special issue of ACM Transactions on Accessible Computing (TACCESS) On Speech and Language Interaction for Daily Assistive Technology (SLPAT)
Second Call for Papers - Special Issue of ACM Transactions on Accessible Computing (TACCESS) on Speech and Language Interaction for Daily Assistive Technology

Guest Editors: François Portet, Frank Rudzicz, Jan Alexandersson, Heidi Christensen

Assistive technologies (AT) allow individuals with disabilities to do things that would otherwise be difficult or impossible. Many assistive technologies provide universal access, such as modifications to televisions or telephones that make them accessible to those with vision or hearing impairments. An important sub-discipline within this community is Augmentative and Alternative Communication (AAC), which focuses on communication technologies for those with impairments that interfere with some aspect of human communication, including spoken or written modalities. Another important sub-discipline is Ambient Assisted Living (AAL), which facilitates independent living; these technologies break down the barriers faced by people with physical or cognitive impairments and support their relatives and caregivers. These technologies are expected to improve users' quality of life and promote independence, accessibility, learning, and social connectivity. Speech and natural language processing (NLP) can be used in AT/AAC in a variety of ways, including improving the intelligibility of unintelligible speech and providing communicative assistance for frail individuals or those with severe motor impairments. The range of applications and technologies in AAL that can rely on speech and NLP technologies is very large, and the number of individuals actively working within these research communities is growing, as evidenced by the successful INTERSPEECH 2013 satellite workshop on Speech and Language Processing for Assistive Technologies (SLPAT).
In particular, one of the greatest challenges in AAL is to design smart spaces (e.g., at home, work, or hospital) and intelligent companions that anticipate user needs, enable users to interact with and in their daily environment, and provide ways to communicate with others. This technology can benefit visually, physically, speech- or cognitively impaired persons.

Topics of interest for submission to this special issue include (but are not limited to):
• Speech, natural language and multimodal interfaces designed for people with physical or cognitive impairments
• Applications of speech and NLP technology (automatic speech recognition, synthesis, dialogue, natural language generation) for AT applications
• Novel modeling and machine learning approaches for AT applications
• Long-term adaptation of speech/NLP-based AT systems to changes in the user
• User studies and overviews of speech/NLP technology for AT: understanding users' needs and future speech and language based technologies
• Understanding, modeling and recognition of aged or disordered speech
• Speech analysis and diagnosis: automatic recognition and detection of speech pathologies and speech capability loss
• Speech-based distress recognition
• Automated processing of symbol languages, sign language and nonverbal communication, including translation systems
• Text and audio processing for improved comprehension and intelligibility, e.g., sentence simplification or text-to-speech
• Evaluation methodology of systems and components in the lab and in the wild
• Resources: corpora and annotation schemes
• Other topics in AAC, AAL, and AT

Submission process

Contributions must not have been previously published or be under consideration for publication elsewhere, although substantial extensions of conference or workshop papers will be considered, as long as they adhere to ACM's minimum standards regarding prior publication (http://www.acm.org/pubs/sim_submissions.html).
Studies involving experiments with real target users will be appreciated. All submissions must be prepared according to the Guide for Authors as published on the journal website at http://www.rit.edu/gccis/taccess/. Submissions should follow the journal's suggested writing format (http://www.gccis.rit.edu/taccess/authors.html) and should be submitted through Manuscript Central (http://mc.manuscriptcentral.com/taccess), indicating that the paper is intended for the Special Issue. All papers will be subject to the peer review process, and final decisions regarding publication will be based on this review.

Important dates:
◦ Full paper submission: 31st March 2014
◦ Response to authors: 30th June 2014
◦ Revised submission deadline: 31st August 2014
◦ Notification of acceptance: 31st October 2014
[UAI] Third CFP - 5th annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT) at ACL 2014
Greetings,

We are pleased to announce the third call for papers for the fifth annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT), to be co-located with ACL 2014 in Baltimore in June 2014 (http://www.cs.jhu.edu/ACL2014/). The deadline for submission of papers and demo proposals is 21 March. Full details on the workshop, topics of interest, timeline, and formatting of regular papers can be found here: http://www.slpat.org/slpat2014

The paper submission website is here: https://www.softconf.com/acl2014/SLPAT2014/

This 1-day workshop will bring together researchers from all areas of speech and language technology with a common interest in making everyday life more accessible for people with physical, cognitive, sensory, emotional, or developmental disabilities. The workshop will provide an opportunity for individuals from various research communities, and the individuals with whom they are working, to share research findings and to discuss present and future challenges and opportunities.

General topics include but are not limited to:
. Automated processing of sign language
. Speech synthesis and speech recognition for physical or cognitive impairments
. Speech transformation for improved intelligibility
. Speech and language technologies for assisted living
. Translation systems: to and from speech, text, symbols and sign language
. Novel modeling and machine learning approaches for AAC/AT applications
. Text processing for improved comprehension, e.g., sentence simplification or text-to-speech
. Silent speech: speech technology based on sensors without audio
. Symbol languages, sign languages, nonverbal communication
. Dialogue systems and natural language generation for assistive technologies
. Multimodal user interfaces and dialogue systems adapted to assistive technologies
. NLP for cognitive assistance applications
. Presentation of graphical information for people with visual impairments
. Speech and NLP applied to typing interface applications
. Brain-computer interfaces for language processing applications
. Speech, natural language and multimodal interfaces to assistive technologies
. Assessment of speech and language processing within the context of assistive technology
. Web accessibility: text simplification, summarization, and adapted presentation modes such as speech, signs or symbols
. Deployment of speech and NLP tools in the clinic or in the field
. Linguistic resources: corpora and annotation schemes
. Evaluation of systems and components, including methodology
. Anything included in this year's special topic
. Other topics in Augmentative and Alternative Communication

Please contact the conference organizers at slpat2014-works...@googlegroups.com with any questions.

Important dates:
21 March: Paper/demo submissions due
11 April: Notification of acceptance
28 April: Camera-ready papers due
26 June: SLPAT workshop

We look forward to seeing you!

The organizing committee of SLPAT 2014,
Jan Alexandersson, DFKI, Germany
Dimitra Anastasiou, University of Bremen, Germany
Cui Jian, SFB/TR 8 Spatial Cognition, University of Bremen, Germany
Ani Nenkova, University of Pennsylvania, USA
Rupal Patel, Northeastern University, USA
Frank Rudzicz, Toronto Rehabilitation Institute and University of Toronto, Canada
Annalu Waller, University of Dundee, Scotland
Desislava Zhekova, University of Munich, Germany

Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto; Founder and Chief Executive Officer, Thotra Incorporated
Website: http://www.cs.toronto.edu/~frank
Phone (office): 416 597 3422 x7971
Fax: 416 597 3031
[UAI] 3rd CFP -- Special issue of ACM Transactions on Accessible Computing (TACCESS) On Speech and Language Interaction for Daily Assistive Technology (SLPAT)
Third Call for Papers - Special Issue of ACM Transactions on Accessible Computing (TACCESS) on Speech and Language Interaction for Daily Assistive Technology

Guest Editors: François Portet, Frank Rudzicz, Jan Alexandersson, Heidi Christensen

Assistive technologies (AT) allow individuals with disabilities to do things that would otherwise be difficult or impossible. Many assistive technologies provide universal access, such as modifications to televisions or telephones that make them accessible to those with vision or hearing impairments. An important sub-discipline within this community is Augmentative and Alternative Communication (AAC), which focuses on communication technologies for those with impairments that interfere with some aspect of human communication, including spoken or written modalities. Another important sub-discipline is Ambient Assisted Living (AAL), which facilitates independent living; these technologies break down the barriers faced by people with physical or cognitive impairments and support their relatives and caregivers. These technologies are expected to improve users' quality of life and promote independence, accessibility, learning, and social connectivity. Speech and natural language processing (NLP) can be used in AT/AAC in a variety of ways, including improving the intelligibility of unintelligible speech and providing communicative assistance for frail individuals or those with severe motor impairments. The range of applications and technologies in AAL that can rely on speech and NLP technologies is very large, and the number of individuals actively working within these research communities is growing, as evidenced by the successful INTERSPEECH 2013 satellite workshop on Speech and Language Processing for Assistive Technologies (SLPAT).
In particular, one of the greatest challenges in AAL is to design smart spaces (e.g., at home, work, or hospital) and intelligent companions that anticipate user needs, enable users to interact with and in their daily environment, and provide ways to communicate with others. This technology can benefit visually, physically, speech- or cognitively impaired persons.

Topics of interest for submission to this special issue include (but are not limited to):
• Speech, natural language and multimodal interfaces designed for people with physical or cognitive impairments
• Applications of speech and NLP technology (automatic speech recognition, synthesis, dialogue, natural language generation) for AT applications
• Novel modeling and machine learning approaches for AT applications
• Long-term adaptation of speech/NLP-based AT systems to changes in the user
• User studies and overviews of speech/NLP technology for AT: understanding users' needs and future speech and language based technologies
• Understanding, modeling and recognition of aged or disordered speech
• Speech analysis and diagnosis: automatic recognition and detection of speech pathologies and speech capability loss
• Speech-based distress recognition
• Automated processing of symbol languages, sign language and nonverbal communication, including translation systems
• Text and audio processing for improved comprehension and intelligibility, e.g., sentence simplification or text-to-speech
• Evaluation methodology of systems and components in the lab and in the wild
• Resources: corpora and annotation schemes
• Other topics in AAC, AAL, and AT

Submission process

Contributions must not have been previously published or be under consideration for publication elsewhere, although substantial extensions of conference or workshop papers will be considered, as long as they adhere to ACM's minimum standards regarding prior publication (http://www.acm.org/pubs/sim_submissions.html).
Studies involving experiments with real target users will be appreciated. All submissions must be prepared according to the Guide for Authors as published on the journal website at http://www.rit.edu/gccis/taccess/. Submissions should follow the journal's suggested writing format (http://www.gccis.rit.edu/taccess/authors.html) and should be submitted through Manuscript Central (http://mc.manuscriptcentral.com/taccess), indicating that the paper is intended for the Special Issue. All papers will be subject to the peer review process, and final decisions regarding publication will be based on this review.

Important dates:
◦ Full paper submission: 31st March 2014
◦ Response to authors: 30th June 2014
◦ Revised submission deadline: 31st August 2014
◦ Notification of acceptance: 31st October 2014
[UAI] DEADLINE EXTENSION -- 28 April 2014 -- Special issue of ACM Transactions on Accessible Computing (TACCESS) On Speech and Language Interaction for Daily Assistive Technology
Deadline extension - Special Issue of ACM Transactions on Accessible Computing (TACCESS) on Speech and Language Interaction for Daily Assistive Technology (SLPAT)

Guest Editors: François Portet, Frank Rudzicz, Jan Alexandersson, Heidi Christensen

Please note that, to accommodate other recent calls for papers, we are extending the deadline for full paper submission to the ACM Transactions on Accessible Computing (TACCESS) Special Issue on Speech and Language Interaction for Daily Assistive Technology. We are also adjusting the response-to-authors deadline. The new dates are as follows:

* ==> Full paper submission: 28th April 2014 <==
* Response to authors: 14th July 2014
* Revised submission deadline: 31st August 2014
* Notification of acceptance: 31st October 2014
* Final manuscripts due: 30th November 2014

Submission process

o Contributions must not have been previously published or be under consideration for publication elsewhere, although substantial extensions of conference or workshop papers will be considered, as long as they adhere to ACM's minimum standards regarding prior publication (http://www.acm.org/pubs/sim_submissions.html). Studies involving experiments with real target users will be appreciated. All submissions must be prepared according to the Guide for Authors as published on the journal website at http://www.rit.edu/gccis/taccess/.
o Submissions should follow the journal's suggested writing format (http://www.gccis.rit.edu/taccess/authors.html) and should be submitted through Manuscript Central (http://mc.manuscriptcentral.com/taccess), indicating that the paper is intended for the Special Issue. All papers will be subject to the peer review process, and final decisions regarding publication will be based on this review.
Topics of interest for submission to this special issue include (but are not limited to):
• Speech, natural language and multimodal interfaces designed for people with physical or cognitive impairments
• Applications of speech and NLP technology (automatic speech recognition, synthesis, dialogue, natural language generation) for AT applications
• Novel modeling and machine learning approaches for AT applications
• Long-term adaptation of speech/NLP-based AT systems to changes in the user
• User studies and overviews of speech/NLP technology for AT: understanding users' needs and future speech and language based technologies
• Understanding, modeling and recognition of aged or disordered speech
• Speech analysis and diagnosis: automatic recognition and detection of speech pathologies and speech capability loss
• Speech-based distress recognition
• Automated processing of symbol languages, sign language and nonverbal communication, including translation systems
• Text and audio processing for improved comprehension and intelligibility, e.g., sentence simplification or text-to-speech
• Evaluation methodology of systems and components in the lab and in the wild
• Resources: corpora and annotation schemes
• Other topics in AAC, AAL, and AT

Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto; Founder and Chief Executive Officer, Thotra Incorporated
Website: http://www.cs.toronto.edu/~frank
Phone (office): 416 597 3422 x7971
Fax: 416 597 3031
[UAI] ***Deadline Extension, correction*** SLPAT 2014 at ACL 2014 -- 7 April 2014
*** DEADLINE EXTENSION ***
5th annual workshop on Speech and Language Processing for Assistive Technologies (SLPAT) at ACL 2014
==> 7 April 2014 <==

We are extending the deadline for paper submission to the SLPAT 2014 workshop to 7 April 2014 (23h59 GMT). This workshop brings together researchers from all areas of speech and language technology with a common interest in making everyday life more accessible for people with physical, cognitive, sensory, emotional, or developmental disabilities. More information on this workshop can be found here: http://www.slpat.org/slpat2014

There are two tracks, regular papers and demos; we welcome 4-to-8-page submissions to both tracks. The paper submission website is here: https://www.softconf.com/acl2014/SLPAT2014/

--== Important dates ==--
7 April: Paper/demo submissions due
21 April: Notification of acceptance <-- correction
28 April: Camera-ready papers due
26 June: SLPAT workshop at ACL 2014 in Baltimore, Maryland

Please contact the conference organizers at slpat2014-works...@googlegroups.com with any questions. We look forward to seeing you!
The organizing committee of SLPAT 2014,
Jan Alexandersson, DFKI, Germany
Dimitra Anastasiou, University of Bremen, Germany
Cui Jian, SFB/TR 8 Spatial Cognition, University of Bremen, Germany
Ani Nenkova, University of Pennsylvania, USA
Rupal Patel, Northeastern University, USA
Frank Rudzicz, Toronto Rehabilitation Institute and University of Toronto, Canada
Annalu Waller, University of Dundee, Scotland
Desislava Zhekova, University of Munich, Germany

Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto; Founder and Chief Executive Officer, Thotra Incorporated
Website: http://www.cs.toronto.edu/~frank
Phone (office): 416 597 3422 x7971
Fax: 416 597 3031
[UAI] Postdoctoral fellowship in speech communication with robots for people with Alzheimer's disease
--== POSTDOCTORAL FELLOWSHIP in speech communication with robots for people with Alzheimer's disease ==--

Employer: Toronto Rehabilitation Institute and the University of Toronto
Title: PostDoc
Specialty: Machine learning, natural language processing, human-computer interaction
Location: Toronto, Ontario, Canada
Deadline: Until filled
Date posted: 31 March 2014

We are seeking a skilled postdoctoral fellow (PDF) whose expertise intersects automatic speech recognition (ASR) and human-computer interaction (HCI). The PDF will work with a team of internationally recognized researchers on software for two-way speech-based dialogue between individuals with Alzheimer's disease (AD) and robot 'caregivers'. This software will automatically adapt the vocabularies, language models, and acoustic models of the component ASR to data collected from individuals with AD. The type of speech produced by the robot in response to human activity is vital, and several statistical models of dialogue will be pursued, including partially observable Markov decision processes. Work will involve software development, data analysis, dissemination of results (e.g., papers and conferences), and partial supervision of graduate and undergraduate students. Some data collection may be involved. Although primarily a technological intervention, this work is highly multidisciplinary, with strong connections to speech-language pathology and clinical practice.
The successful applicant will have:
1) A doctoral degree in computer science, electrical engineering, biomedical engineering, or a related discipline;
2) Evidence of impact in research through a strong publication record in relevant venues;
3) Evidence of strong collaborative skills, including possible supervision of junior researchers or students, or equivalent industrial experience;
4) Excellent interpersonal, written, and oral communication skills;
5) A strong technical background in machine learning, natural language processing, and human-computer interaction.

Experience with clinical populations, especially those with dementia or Alzheimer's disease, is preferred.

This work will be conducted at the Toronto Rehabilitation Institute and at the University of Toronto. Toronto Rehab has a diverse workforce and is an equal opportunity employer. Work can commence as soon as June 2014. The initial contract is for 1 year, although extension is possible; the project itself will last 3 years.

If you are interested in applying, please contact Dr. Frank Rudzicz by email at fr...@cs.toronto.edu with any questions, or with 1) your up-to-date CV, 2) a cover letter, and 3) a short 1-page statement of purpose.

Frank Rudzicz, PhD
Scientist, Toronto Rehabilitation Institute; Assistant professor, Department of Computer Science, University of Toronto; Founder and Chief Executive Officer, Thotra Incorporated
Website: http://www.cs.toronto.edu/~frank
Phone (office): 416 597 3422 x7971
Fax: 416 597 3031