KUAN-WEI TSENG

Computer Vision Engineer

Download Resume

ABOUT

I am currently a Computer Vision Engineer at Mujin, Inc. I earned my M.S. degree from Tokyo Institute of Technology, where I was advised by Prof. Ikuro Sato. I had the privilege of collaborating with Prof. Rei Kawakami and Prof. Satoshi Ikehata on human motion prediction with transformer models. During my graduate studies, I interned at Apple Japan, developing automation tools, and at the Denso IT Lab, focusing on interactive object segmentation.
Prior to my graduate studies, I worked in the Image and Vision Lab (imLab) and the AI Application and Integration Lab (AI^2 Lab) at National Taiwan University (NTU) as a research associate (assistant) with Prof. Chu-Song Chen and Prof. Yi-Ping Hung. I received my B.S. degree in Engineering from National Taiwan University, working with Prof. Yi-Ping Hung.
My research interests lie in the area of visual computing, focusing on 3D computer vision, deep learning, and interactive technologies.

EDUCATION

2022.04 - 2024.03

Degree Program

Tokyo Institute of Technology (Tokyo Tech)

M.S., Department of Computer Science

2016 - 2020

Degree Program

National Taiwan University (NTU)

B.S. in Engineering, Department of Mechanical Engineering

  • Joined the Image and Vision Lab (imLab), directed by Prof. Yi-Ping Hung at the Dept. of Computer Science, as a research student in 2018. (See Research Experience)
  • Received multidisciplinary training in both software and hardware, and conducted research on computer vision, robotics, and human-computer interaction.

Research Advisor: Prof. Yi-Ping Hung.

Summer 2017

Exchange Student

University of Oxford

Advanced Humanity and English Language Programme

  • Studied abroad as an exchange student from National Taiwan University.
  • Completed a comprehensive academic English program covering topics such as academic and creative writing, debating, British culture, and job-seeking skills.

EXPERIENCE

2023.03 - 2023.09

Robotics Intern

Apple Inc.

Field Design Engineering

  • Worked on robotics solutions (localization and mapping, point cloud segmentation) for product and service validation.

2022.11 - 2023.02

Research Intern

Denso IT Laboratory

R&D Group

  • Developed interactive object segmentation with a transformer-based model for fast data annotation.

Mentor: Dr. Teppei Suzuki.

2022.10 - Current

Research Assistant

Tokyo Institute of Technology (Tokyo Tech)

Department of Computer Science

  • Working on human motion generation projects.

Advisor: Prof. Ikuro Sato, Prof. Rei Kawakami.

2019.09 - 2022.10

Research Assistant

National Taiwan University (NTU)

Department of Computer Science & Information Engineering

  • Developed learning-based methods for camera pose estimation, image depth estimation, video stabilization, view synthesis and style transfer.
  • Developed visual-inertial fusion methods for camera and object pose estimation. Quantified the influence of IMU noise parameters on localization performance.
  • Developed interactive systems with visual, auditory, olfactory, or haptic feedback for augmented reality and virtual reality applications.

See more in Publications and Projects.

Advisor: Prof. Chu-Song Chen, Prof. Yi-Ping Hung.

Spring 2021 & Spring 2022

Teaching Assistant

National Taiwan University (NTU)

Department of Computer Science & Information Engineering

  • [CSIE 5249] 3D Computer Vision with Deep Learning Applications
    Instructor: Prof. Chu-Song Chen
  • [CSIE 5079] Pattern Analysis and Classification
    Instructor: Prof. Yi-Ping Hung.
  • [CSIE 4004] Computer Science and Information Technology (II)
    Instructor: Prof. Chu-Song Chen

PUBLICATIONS

Real-Time Object Pose Tracking System with Low Computational Cost for Mobile Devices
Yo-Chung Lau, Kuan-Wei Tseng, Peng-Yuan Kao, I-Ju Hsieh, Hsiao-Ching Tseng, and Yi-Ping Hung
IEEE Journal of Indoor and Seamless Positioning and Navigation, 2023.
VIUNet: Deep Visual–Inertial–UWB Fusion for Indoor UAV Localization
Peng-Yuan Kao, Hsui-Jui Chang, Kuan-Wei Tseng, Timothy Chen, He-Lin Luo, Yi-Ping Hung
IEEE Access, 2023.
Pseudo-3D Scene Modeling for Virtual Reality Using Stylized Novel View Synthesis
Kuan-Wei Tseng, Jing-Yuan Huang, Yang-Sheng Chen, Chu-Song Chen, Yi-Ping Hung
ACM SIGGRAPH 2022 Posters
Artistic Style Novel View Synthesis Based on A Single Image
Kuan-Wei Tseng, Yao-Chih Lee, Chu-Song Chen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022.
3D Video Stabilization with Depth Estimation by CNN-based Optimization
Yao-Chih Lee, Kuan-Wei Tseng, Yu-Ta Chen, Chien-Cheng Chen, Chu-Song Chen, Yi-Ping Hung
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
aBio: Active Bi-Olfactory Display Using Subwoofers for Virtual Reality
You-Yang Hu, Yao-Fu Jan, Kuan-Wei Tseng, You-Shin Tsai, Hung-Ming Sung, Jin-Yao Lin, Yi-Ping Hung
ACM International Conference on Multimedia (ACMMM), 2021.
ORAL PAPER, BEST STUDENT PAPER
PixStabNet: Fast Multi-Scale Deep Online Video Stabilization with Pixel-Based Warping
Yu-Ta Chen, Kuan-Wei Tseng, Yao-Chih Lee, Chun-Yu Chen, Yi-Ping Hung
IEEE International Conference on Image Processing (ICIP), 2021.
Augmented Tai-Chi Chuan Practice Tool with Pose Evaluation
Yao-Fu Jan, Kuan-Wei Tseng, Peng-Yuan Kao, Yi-Ping Hung
IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR), 2021.
ORAL PAPER
Camera Ego-Positioning Using Sensor Fusion and Complementary Method
Peng-Yuan Kao, Kuan-Wei Tseng, Tian-Yi Shen, Yan-Bin Song, Kuan-Wen Chen, Shih-Wei Hu, Sheng-Wen Shih, and Yi-Ping Hung
Pattern Recognition. ICPR International Workshops and Challenges, 2021.

PREPRINTS

Globally Consistent Video Depth and Pose Estimation with Efficient Test-Time Training

Dense depth and pose estimation is a vital prerequisite for various video applications. In this paper, we present GCVD, a globally consistent method for learning-based video structure from motion (SfM). It improves the robustness of learning-based methods with flow-guided keyframes and a well-established depth prior.

Labor omnia vincit (Hard work conquers all. - Virgil)