
Szymon Stachniak Phones & Addresses

  • Redmond, WA
  • 9920 119th St, Kirkland, WA 98034 (425) 285-9594
  • 9920 NE 119th St #107, Kirkland, WA 98034 (425) 285-9594
  • Kiona, WA
  • 9817 225th Ave NE, Redmond, WA 98053 (425) 285-9594

Work

Company: Microsoft (since May 2006) Position: Principal Software Development Lead

Education

Degree: Master's School: York University Specialties: Computer Science

Skills

Programming • Software Engineering • C++ • Software Development • Kinect • DirectX • Video Games • Software Design • HLSL • Algorithms • Agile Methodologies • Machine Learning • Xbox • C# • XML • Visual Studio • C • 3D Modeling • Integration • Leadership • Object-Oriented Design • Distributed Systems • Computational Geometry • Mixed Reality • Unity • Team Leadership • Software Architecture

Languages

English • French • Polish

Interests

Science and Technology • Children • Environment

Industries

Computer Software

Public records

Vehicle Records

Szymon Stachniak

Address:
9817 225th Ave NE, Redmond, WA 98053
VIN:
JM1NC2EF0A0211898
Make:
MAZDA
Model:
MX-5 MIATA
Year:
2010

Resumes


Principal Software Development Lead

Location:
Seattle, WA
Industry:
Computer Software
Work:
Microsoft
Principal Software Development Lead

Autodesk Nov 2004 - Mar 2006
3D Software Developer

Autodesk 2005 - 2006
Software Developer

Geotango International Corp Jun 2003 - Nov 2004
3D Software Developer
Education:
York University
Masters, Computer Science
Skills:
Programming
Software Engineering
C++
Software Development
Kinect
DirectX
Video Games
Software Design
HLSL
Algorithms
Agile Methodologies
Machine Learning
Xbox
C#
XML
Visual Studio
C
3D Modeling
Integration
Leadership
Object-Oriented Design
Distributed Systems
Computational Geometry
Mixed Reality
Unity
Team Leadership
Software Architecture
Interests:
Science and Technology
Children
Environment
Languages:
English
French
Polish

Publications

US Patents

Human Tracking System

US Patent:
8542910, Sep 24, 2013
Filed:
Feb 2, 2012
Appl. No.:
13/365121
Inventors:
Tommer Leyvand - Seattle WA, US
Johnny Lee - Bellevue WA, US
Szymon Stachniak - Kirkland WA, US
Craig Peeper - Kirkland WA, US
Shao Liu - Urbana IL, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06K 9/00
US Classification:
382/154, 382/103, 382/106
Abstract:
An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities.
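The pipeline the abstract describes — downsampling a depth image into a voxel grid, then removing the background to isolate a foreground target — could be sketched roughly as follows. This is an illustrative approximation, not the patented implementation; the block size, the zero-means-no-reading convention, and the fixed depth threshold are all assumptions.

```python
import numpy as np

def downsample_to_voxels(depth_image, block=4):
    """Downsample a 2-D depth image into a coarser grid by averaging the
    non-zero depth readings in each block x block patch (zero = no reading)."""
    h, w = depth_image.shape
    h, w = h - h % block, w - w % block            # trim to a multiple of block
    patches = depth_image[:h, :w].reshape(h // block, block, w // block, block)
    patches = patches.transpose(0, 2, 1, 3).reshape(h // block, w // block, -1)
    sums = patches.sum(axis=2, dtype=np.float64)
    counts = (patches > 0).sum(axis=2)
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)

def remove_background(grid, max_depth=2000.0):
    """Zero out cells farther than max_depth, keeping only the foreground
    (e.g. a human target standing in front of the sensor)."""
    fg = grid.copy()
    fg[fg > max_depth] = 0.0
    return fg
```

A real system would locate extremities (head, hands, feet) in the remaining foreground voxels and fit a skeletal model to them, which is beyond this sketch.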

Orienting The Position Of A Sensor

US Patent:
8553934, Oct 8, 2013
Filed:
Dec 8, 2010
Appl. No.:
12/963328
Inventors:
Stanley W. Adermann - Redmond WA, US
Mark Plagge - Sammamish WA, US
Craig Peeper - Kirkland WA, US
Szymon Stachniak - Kirkland WA, US
David C. Kline - Bothell WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06K 9/00
US Classification:
382/103, 382/154
Abstract:
Techniques are provided for re-orienting a field of view of a depth camera having one or more sensors. The depth camera may have one or more sensors for generating a depth image and may also have an RGB camera. In some embodiments, the field of view is re-oriented based on the depth image. The position of the sensor(s) may be altered to change the field of view automatically based on an analysis of objects in the depth image. The re-orientation process may be repeated until a desired orientation of the sensor is determined. Input from the RGB camera might be used to validate a final orientation of the depth camera, but is not required during the process of determining a new possible orientation of the field of view.

Human Tracking System

US Patent:
8564534, Oct 22, 2013
Filed:
Oct 7, 2009
Appl. No.:
12/575388
Inventors:
Tommer Leyvand - Seattle WA, US
Johnny Lee - Bellevue WA, US
Craig Peeper - Kirkland WA, US
Szymon Stachniak - Kirkland WA, US
Shao Liu - Urbana IL, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G09G 5/00
US Classification:
345/156, 715/863, 382/103
Abstract:
An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities.

Systems And Methods For Removing A Background Of An Image

US Patent:
8634636, Jan 21, 2014
Filed:
Oct 7, 2009
Appl. No.:
12/575363
Inventors:
Craig Peeper - Kirkland WA, US
Johnny Lee - Bellevue WA, US
Tommer Leyvand - Seattle WA, US
Szymon Stachniak - Kirkland WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06K 9/00
H04N 15/00
US Classification:
382/154, 382/173, 348/42
Abstract:
An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may then be discarded to isolate one or more voxels associated with a foreground object such as a human target and the isolated voxels associated with the foreground object may be processed.

In-Home Depth Camera Calibration

US Patent:
20120105585, May 3, 2012
Filed:
Nov 3, 2010
Appl. No.:
12/939038
Inventors:
Prafulla J. Masalkar - Issaquah WA, US
Szymon P. Stachniak - Kirkland WA, US
Tommer Leyvand - Seattle WA, US
Zhengyou Zhang - Bellevue WA, US
Leonardo Del Castillo - Carnation WA, US
Zsolt Mathe - Redmond WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
H04N 13/02
US Classification:
348/46, 348/E17.002
Abstract:
A system and method are disclosed for calibrating a depth camera in a natural user interface. The system in general obtains an objective measurement of true distance between a capture device and one or more objects in a scene. The system then compares the true depth measurement to the depth measurement provided by the depth camera at one or more points and determines an error function describing an error in the depth camera measurement. The depth camera may then be recalibrated to correct for the error. The objective measurement of distance to one or more objects in a scene may be accomplished by a variety of systems and methods.
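The calibration loop the abstract outlines — compare the camera's reported depth to an objectively measured true depth, fit an error function, then invert it to recalibrate — could be sketched as below. This is a minimal illustration assuming a simple linear error model (scale plus offset), which the patent does not specify.

```python
import numpy as np

def fit_depth_error(measured, true):
    """Fit measured ~= a * true + b by least squares.
    The pair (a, b) models the camera's systematic depth error."""
    a, b = np.polyfit(true, measured, 1)
    return a, b

def recalibrate(measured_depth, a, b):
    """Invert the fitted error model to recover a corrected depth."""
    return (measured_depth - b) / a
```

With the error model fitted once from a few ground-truth measurements, every subsequent camera reading can be corrected on the fly via `recalibrate`.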

Validation Analysis Of Human Target

US Patent:
20120159290, Jun 21, 2012
Filed:
Dec 17, 2010
Appl. No.:
12/972341
Inventors:
Jon D. Pulsipher - North Bend WA, US
Parham Mohadjer - Redmond WA, US
Nazeeh Amin ElDirghami - Redmond WA, US
Shao Liu - Bellevue WA, US
Patrick Orville Cook - Monroe WA, US
James Chadon Foster - Redmond WA, US
Szymon P. Stachniak - Kirkland WA, US
Tommer Leyvand - Seattle WA, US
Joseph Bertolami - Seattle WA, US
Michael Taylor Janney - Sammamish WA, US
Kien Toan Huynh - Redmond WA, US
Charles Claudius Marais - Duvall WA, US
Spencer Dean Perreault - Bellevue WA, US
Robert John Fitzgerald - Kirkland WA, US
Wayne Richard Bisson - Seattle WA, US
Craig Carroll Peeper - Kirkland WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06F 11/07
US Classification:
714/819, 714/E11.024
Abstract:
Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices each including at least one instance of a target recognition, analysis, and tracking pipeline to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set responsive to a request to test the pipeline and receives tracking data output from the pipeline on the at least subset of the searchable set. A report generator outputs an analysis of the tracking data relative to the ground truth in the at least subset to provide an output of the error relative to the ground truth.

Systems And Methods For Tracking A Model

US Patent:
20130070058, Mar 21, 2013
Filed:
Nov 15, 2012
Appl. No.:
13/678288
Inventors:
Tommer Leyvand - Seattle WA, US
Szymon Piotr Stachniak - Kirkland WA, US
Craig Peeper - Kirkland WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
H04N 13/02
G06T 15/00
US Classification:
348/46, 345/419, 348/E13.074
Abstract:
An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.

Gesture Bank To Improve Skeletal Tracking

US Patent:
20130093751, Apr 18, 2013
Filed:
Oct 12, 2011
Appl. No.:
13/271857
Inventors:
Szymon Stachniak - Kirkland WA, US
Ke Deng - Sammamish WA, US
Tommer Leyvand - Seattle WA, US
Scott M. Grant - Woodinville WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06T 15/00
US Classification:
345/419
Abstract:
A method for obtaining gestural input from a user of a computer system. In this method, an image of the user is acquired, and a runtime representation of a geometric model of the user is computed based on the image. The runtime representation is compared against stored data, which includes a plurality of stored metrics each corresponding to a measurement made on an actor performing a gesture. With each stored metric is associated a stored representation of a geometric model of the actor performing the associated gesture. The method returns gestural input based on the stored metric associated with a stored representation that matches the runtime representation.
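The matching step the abstract describes — comparing a runtime representation of the user's geometric model against a bank of stored metrics recorded from an actor — might be sketched as a nearest-neighbor lookup. This is a hypothetical simplification: the bank contents, the metric vectors, and the Euclidean distance criterion are assumptions, not details from the patent.

```python
import math

def match_gesture(runtime_metrics, gesture_bank):
    """Return the name of the gesture whose stored metric vector is closest
    (by Euclidean distance) to the metrics computed from the user's
    skeletal model at runtime."""
    best_name, best_dist = None, float("inf")
    for name, stored in gesture_bank.items():
        dist = math.dist(runtime_metrics, stored)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```

A production system would likely gate the match on a distance threshold and use the matched stored representation to correct the runtime skeletal track, per the abstract.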
Szymon P Stachniak from Redmond, WA, age ~46