Jordan Crittenden

BACKGROUND

I am a software engineer currently living in Boulder, CO. My background is in Electrical Engineering and Software Development. I received a Master of Engineering in EE from Cornell University in 2009. I like systems programming, software architecture, and optimization.

Churchill Navigation (Dec 2015 - Present)

Churchill Navigation builds hardware and software for airborne situational awareness, including augmented reality live-mapping, gimbaled camera systems, and video recording.

Churchill has been migrating much of its software infrastructure to a graph-based architecture, where functionality is contained in software nodes implemented as DLLs. During my first year, I rewrote and significantly enhanced the underlying graph API and created a web editor for real-time construction and monitoring of graphs.

Following this, I took the role of project manager and lead developer of ION, our multi-channel video and metadata recorder. I refactored the codebase to leverage the graph architecture, and added functionality to support several new customer installations. This involved improving the core ION software as well as creating several new graph nodes to speak to cameras and other hardware peripherals.

Subsequently I moved into a new role coordinating our engineering team. Roughly half of my time is still spent writing code; the rest goes to management. I spearheaded the creation of a formal review and compensation program, implemented periodic technical talks, interviewed and oversaw the hiring of several engineers, instituted regular project meetings, and acted as a communication bridge between the CEO and our engineering team.

Relay Foods (Jun 2010 - Dec 2015)

In the summer after my senior year at Cornell, I worked as an intern for Relay Foods. During this time, I adapted an off-the-shelf eCommerce package to support the unique same-day pickup-location business model that Relay employed. I also worked on a BlackBerry mobile app for cataloging products in our partner stores.

I was recruited back to Relay after working at Sandia National Labs. For the first year, I was the only technology employee. I managed development of the customer-facing website and the admin portal, and supported our operations and marketing teams with the reports and tools that they required. I also handled DevOps and any bugs that arose across our technology systems. As the team slowly grew, I managed the new members.

After several years, I moved out of this role and became Vice President of Analytics. In this position I managed our data warehouse and ran various analyses for the company. Among these, I built a convolution-based revenue model, analyzed the effect of first-order fulfillment errors, and built a breakeven financial model to determine the sensitivity of our profitability to various business metrics. I also acted as a technical resource for other analysts in the company.
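A convolution-based revenue model can be sketched briefly. The figures and spending curve below are purely illustrative, not Relay's actual data or implementation; the idea is that each week's revenue sums contributions from all earlier signup cohorts, which is exactly a discrete convolution.

```python
# Illustrative sketch of a convolution-based revenue model (hypothetical
# numbers, not Relay's actual data): weekly new-customer counts are
# convolved with an assumed per-customer spending curve to project
# total weekly revenue.
import numpy as np

new_customers = np.array([100, 120, 90, 150, 130])  # signups per week
# Average revenue one customer generates in each week after signing up
spend_curve = np.array([40.0, 25.0, 15.0, 10.0])

# Revenue in week t sums contributions from every earlier cohort,
# weighted by how many weeks ago that cohort signed up.
weekly_revenue = np.convolve(new_customers, spend_curve)

print(weekly_revenue[:len(new_customers)])
```

Week 0 is just the newest cohort's first-week spend (100 × $40); week 1 adds the second cohort's first week to the first cohort's second week, and so on.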

During my final year at Relay, I spent more time on special projects, including assisting with migration from Quickbooks to a more mature accounting system, monitoring our spending by department, and evaluating options for a warehouse management system.

Sandia National Labs (Aug 2009 - Jun 2010)

After finishing my master's program, I spent a year at Sandia National Laboratories in Albuquerque, where I worked on two primary projects: a synthetic aperture radar system and an FPGA-based high-speed signal processing application. At the end of that year, I was recruited back to Relay Foods as their first technology hire.

Elder Research (Summers and Winters 2000 - 2007)

Throughout high school and most of college, I was an intern at Elder Research, Inc., a data mining consulting firm in Charlottesville, VA. At Elder Research I worked on a wide range of data mining problems, using both commercial and custom modeling software to find patterns in our customers' data. Notable projects that I worked on include:

  • The Netflix Prize. This was a public competition to improve the Netflix recommendation engine. Our team was one of the leading contenders, and our technique (multi-model ensembling) was the same as that used by the competition winner.
  • Semantic Text Processor. I worked on a project to process large bodies of text quickly, identifying key phrases and detecting synonyms using a modified version of locality-sensitive hashing. The algorithm was language-agnostic, meaning there was no built-in understanding of the target language, allowing it to be used on almost any body of text.
  • Custom Data Mining Software. I authored three general purpose data mining tools: a k-nearest neighbor modeler, a custom decision tree algorithm, and a product cross-selling tool.
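The multi-model ensembling mentioned above can be illustrated with a minimal sketch. The models, predictions, and blending weights here are hypothetical, not the team's actual Netflix Prize code; the core idea is simply that a weighted average of several models' predictions often beats any single model.

```python
# Minimal sketch of multi-model ensembling (hypothetical numbers, not
# actual Netflix Prize code): blend several models' predicted ratings
# with a weighted average, weights tuned on a held-out validation set.
import numpy as np

# Hypothetical predictions from three different models for five ratings
preds = np.array([
    [3.8, 2.1, 4.5, 3.0, 1.9],  # e.g. a neighborhood model
    [4.0, 2.4, 4.2, 3.3, 2.2],  # e.g. a matrix-factorization model
    [3.6, 2.0, 4.8, 2.9, 1.7],  # e.g. a regression model
])
weights = np.array([0.5, 0.3, 0.2])  # assumed blending weights, sum to 1

# Ensemble prediction: weighted average across the model axis
ensemble = weights @ preds
print(ensemble)
```

In practice the weights would be fit by minimizing validation error rather than chosen by hand, and the gains come from combining models whose errors are weakly correlated.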