[Typing SVG name banner]

or, according to my ID, Andrew Toshiaki Nakayama Kurauchi.

LinkedIn

I am an Assistant Professor in Computer Science and Engineering at Insper, with a primary focus on teaching. I am passionate about building interactive applications and tools, especially in assistive technology and teaching contexts. I particularly enjoy working with eye trackers and gaze-based interaction.

What I've built for research

These are some of my research projects:

CameraMouseSuite [cross-platform version]

Qt implementation of Camera Mouse Suite, a mouse-replacement interface that allows users to control the mouse pointer using body movements (e.g. head) captured by a webcam. As the user moves their head (or other body part being tracked by the camera), the mouse pointer replicates their movement. Clicks are performed with dwell time (keeping the mouse pointer still for a certain amount of time).
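The dwell-time clicking described above can be sketched as a small detector that fires once when the pointer stays within a radius for long enough. This is an illustrative sketch, not Camera Mouse Suite's actual implementation; the class name, radius, and timing parameters are all hypothetical.

```python
import math

class DwellClickDetector:
    """Fire a click when the pointer stays within `radius` pixels
    for at least `dwell_time` seconds. Illustrative sketch only."""

    def __init__(self, radius=15.0, dwell_time=1.0):
        self.radius = radius
        self.dwell_time = dwell_time
        self.anchor = None    # (x, y) where the current dwell started
        self.anchor_t = None  # timestamp of the dwell start
        self.fired = False    # avoid repeated clicks during one dwell

    def update(self, x, y, t):
        """Feed one pointer sample; return True when a dwell click fires."""
        moved = (self.anchor is None or
                 math.hypot(x - self.anchor[0], y - self.anchor[1]) > self.radius)
        if moved:
            # Pointer left the dwell area: restart the timer here.
            self.anchor, self.anchor_t, self.fired = (x, y), t, False
            return False
        if not self.fired and t - self.anchor_t >= self.dwell_time:
            self.fired = True
            return True
        return False
```

In a real mouse-replacement interface the same detector would typically be fed by the camera tracking loop, with the dwell radius tuned to the user's tracking jitter.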


Haytham Linux

Cross-platform mobile gaze tracking software based on Haytham by Diako Mardanbegi. The mobile eye tracker must be built with at least two cameras: one to capture the scene and the other to capture the eye image. An infrared light source must be attached to the eye camera, and an infrared filter (a piece of exposed film will do) must also be added.


EyeSwipe

Gaze-based text entry method that uses gaze gestures to type words instead of typing letter by letter with dwell-time. The initial and final letters of the word are indicated by performing an eye gesture called "reverse crossing", in which the user looks at a button displayed above the key and then looks back at the key to finish the selection.
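The "reverse crossing" selection above can be sketched as a tiny state machine: a key is selected when gaze goes key → button above it → back to the same key. This is a hedged illustration of the idea, not EyeSwipe's real code; the region tuples are hypothetical.

```python
class ReverseCrossing:
    """Detect the reverse-crossing gesture: key -> its action button
    -> back to the same key completes a selection. Sketch only."""

    def __init__(self):
        self.last_key = None    # key the user looked at most recently
        self.on_button = False  # True after glancing at that key's button

    def update(self, region):
        """`region` is ('key', letter), ('button', letter), or None.
        Returns the selected letter when a reverse crossing completes."""
        kind, name = region if region else (None, None)
        if kind == 'key':
            if self.on_button and name == self.last_key:
                # Looked back at the original key: selection done.
                self.on_button = False
                return name
            self.last_key, self.on_button = name, False
        elif kind == 'button':
            # Only the button of the currently fixated key arms a selection.
            self.on_button = (name == self.last_key)
        return None
```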


Swipe&Switch

An evolution of EyeSwipe that uses context switching between regions as a selection method. There are three regions: the Text, Action, and Gesture regions. To type a word, the user looks at the Gesture region, glances at the letters that form the desired word, and then moves their gaze to either the Text or the Action region.
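The region-switching flow above can be sketched as: letters glanced at while in the Gesture region accumulate into a path, and leaving to the Text region commits it. Region names follow the description; everything else (the class, the letter-path return value) is an illustrative assumption, since the real system maps the gaze path to candidate words.

```python
class SwipeAndSwitch:
    """Sketch of context switching between regions: gaze in the
    Gesture region collects letters, switching to the Text region
    commits the collected path. Illustrative only."""

    def __init__(self):
        self.path = []  # letters glanced at during the current gesture

    def on_gaze(self, region, letter=None):
        """Returns the committed letter path when gaze switches from
        the Gesture region to the Text region, else None."""
        if region == 'gesture':
            if letter:
                self.path.append(letter)
            return None
        # Leaving the Gesture region: Text commits, anything else discards.
        committed = self.path if region == 'text' and self.path else None
        self.path = []
        return committed
```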


HMAGIC: Head Movement and Gaze Input Cascaded Pointing

Head Movement And Gaze Input Cascaded (HMAGIC) pointing is a technique that combines head movement and gaze-based inputs in a fast and accurate mouse-replacement interface. The interface initially places the pointer at the estimated gaze position and then the user makes fine adjustments with their head movements.
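The cascade above is simple to sketch: a coarse jump to the gaze estimate, then fine corrections from head-movement deltas. The function and its `gain` parameter are hypothetical stand-ins for HMAGIC's actual tuning.

```python
def hmagic_pointer(gaze, head_deltas, gain=2.0):
    """Cascaded pointing sketch: warp the pointer to the estimated
    gaze point, then refine it with accumulated head-movement deltas.
    `gain` is an assumed head-movement sensitivity."""
    x, y = gaze                 # coarse stage: jump to the gaze estimate
    for dx, dy in head_deltas:  # fine stage: head movement adjusts it
        x += gain * dx
        y += gain * dy
    return (x, y)
```

The point of the cascade is that gaze is fast but noisy while head movement is slower but precise, so each input handles the stage it is best at.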


Heatmap Explorer

An interactive gaze data visualization tool for the evaluation of computer interfaces. Heatmap Explorer allows the experimenter to control the visualization by selecting temporal intervals and adjusting filter parameters of the eye movement classification algorithm.
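The temporal-interval filtering above can be illustrated with a minimal heatmap builder: only gaze samples inside the selected time window contribute, each spreading a Gaussian kernel over a grid. Grid size, sigma, and the sample format are illustrative assumptions, not Heatmap Explorer's API.

```python
import math

def gaze_heatmap(samples, t_start, t_end, size=(8, 8), sigma=1.0):
    """Accumulate a gaze heatmap from (timestamp, x, y) samples,
    keeping only those inside [t_start, t_end]. Sketch only."""
    w, h = size
    grid = [[0.0] * w for _ in range(h)]
    for t, x, y in samples:
        if not (t_start <= t <= t_end):
            continue  # temporal filter: sample outside selected interval
        for gy in range(h):
            for gx in range(w):
                d2 = (gx - x) ** 2 + (gy - y) ** 2
                grid[gy][gx] += math.exp(-d2 / (2 * sigma ** 2))
    return grid
```

An interactive tool would recompute this as the experimenter drags the interval bounds, and would also run a fixation classifier over the samples first; both are omitted here.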

What I've built for teaching

Here's some stuff I've built for the courses I teach:
