When Muratcan Cicek’s mother tried to enroll him in first grade in their homeland of Turkey, she was told the school couldn’t accommodate her son, who was born with cerebral palsy and had a speech impediment.
It took four years, but thanks to his mother's persistence, his own drive to learn, and a change in Turkish law, Cicek was eventually able to start school, though his father died of lung cancer before he could see it.
Today, the 27-year-old Cicek is a Ph.D. candidate in computer engineering at UC Santa Cruz and one of only 50 people from thousands of applicants to win a prestigious 2020 Google Ph.D. fellowship. His research focuses on machine learning applied to human-computer interaction, specifically on improving head-based pointing methods for people who cannot use a mouse or trackpad.
“I am really excited about it (the fellowship) because it will make things easier for my research,” Cicek said, speaking over a Zoom call using prototype software that is part of Project Euphonia, a Google research initiative that aims to improve computers’ abilities to understand impaired speech. Fellow UC Santa Cruz electrical and computer engineering Ph.D. candidate Mustafa Mutlu, who is working on plasmonic nano-bio sensor development, also helped translate.
Cicek said he was born in a rural part of Turkey but knew from a young age that if he wanted to reach his goals, he needed to move to a bigger city where there were more opportunities.
But how to get there?
“I felt I had to work so hard,” he said.
Taught to read and do simple math by his mother, he finally joined his classmates in third grade and went on to graduate first in his high school class. He taught himself to code and was awarded a scholarship to Ozyegin University in Istanbul after winning the top prize in a coding-camp competition. He earned a bachelor's degree in computer science and later studied at Oregon State University. In 2017, he was admitted to UC Santa Cruz's Ph.D. program in computer engineering and joined Professor Roberto Manduchi's Computer Vision Lab.
There, he is working with Manduchi to improve head-based pointing (HBP) techniques for computers. Eye-gaze technology currently has a number of limitations, and while HBP is more popular among its users, too much light or an overly busy background can blind the computer's camera so the assistive technology stops working. In addition, while a person can move their head in many ways (nodding up and down, turning left and right, tilting side to side, leaning forward and back), current technology recognizes only up-and-down and left-to-right movements. That isn't natural or ergonomic, according to Manduchi. He and Cicek are researching ways to teach a machine to read this wider range of head movements so the assistive technology feels more intuitive and user friendly.
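To illustrate the two-axis limitation the researchers describe, here is a minimal sketch, not the lab's actual code, of how a basic head-based pointer might turn tracked head-pose angles into cursor motion; the class, function, gain value, and sign conventions are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw_deg: float    # turn left/right, as reported by a face tracker
    pitch_deg: float  # nod up/down
    roll_deg: float   # tilt side to side (ignored by a basic two-axis mapper)

def pointer_delta(curr: HeadPose, prev: HeadPose, gain: float = 12.0) -> tuple[float, float]:
    """Relative cursor motion from the change in yaw and pitch between frames.

    Roll (tilting) and forward/back motion are discarded here, which is
    exactly the limitation the researchers are working to remove.
    """
    dx = gain * (curr.yaw_deg - prev.yaw_deg)       # turn right -> cursor right
    dy = -gain * (curr.pitch_deg - prev.pitch_deg)  # look up -> cursor up (sign is a convention)
    return dx, dy

# Example: a small rightward head turn plus a slight tilt between two frames.
prev = HeadPose(yaw_deg=0.0, pitch_deg=0.0, roll_deg=0.0)
curr = HeadPose(yaw_deg=2.5, pitch_deg=0.0, roll_deg=1.5)
print(pointer_delta(curr, prev))  # (30.0, -0.0) -- the tilt has no effect
```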
Having someone like Cicek, a direct user of the technology, on the team is important for their research, Manduchi said. Too often, a technological solution is developed only for its creators to discover later that it doesn't fit an actual user's needs.
Cicek, he said, is not only an excellent coder but “he’s got lots of energy and is extremely motivated.”
This summer, Cicek also interned with Google as part of Project Euphonia, recording more than 2,000 phrases used to build a personalized speech-recognition model that can transcribe his words as he speaks. The idea behind the initiative is to help people with dysarthria (impaired speech) caused by stroke, traumatic brain injury, ALS, and other conditions to communicate more easily, according to a Google website.
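As a rough illustration of what building a personalized speech-recognition model from recorded phrases can involve, the sketch below fine-tunes an open-source CTC speech recognizer on a speaker's own recordings. This is not Project Euphonia's actual pipeline; the model choice, file names, transcripts, and hyperparameters are assumptions made for the example.

```python
# Illustrative sketch only: adapting an open-source speech recognizer
# (Wav2Vec2, via Hugging Face transformers) to one speaker's recordings.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical (audio file, transcript) pairs recorded by the user.
recordings = [
    ("phrase_0001.wav", "open the results spreadsheet"),
    ("phrase_0002.wav", "send the draft to my advisor"),
]

model.train()
for path, text in recordings:
    waveform, sr = torchaudio.load(path)
    audio = torchaudio.functional.resample(waveform, sr, 16000).mean(dim=0).numpy()
    inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
    labels = processor.tokenizer(text.upper(), return_tensors="pt").input_ids  # base-960h vocab is uppercase

    loss = model(inputs.input_values, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After adaptation, the same model transcribes new audio from this speaker:
# logits = model(inputs.input_values).logits
# print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```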
And while the prototype Cicek was using for the interview had a few bugs that required Mutlu to step in with a translation here and there, Cicek believes the program will help him communicate better in his career as a computer scientist.
Thanks to support from companies like Google, he said, he will be able to continue his research and help more people communicate more easily.