CHAT TO US ABOUT:
- Our nnMAX Inferencing Accelerators
- Flex Logix Interconnect Technology
- InferX and nnMAX Neural Inferencing Solutions
MEET OUR TEAM:
Abhijit Abhyankar
VP Hardware Engineering
Vladimir Bronstein
Director, Inference Software Development
Michel Cekleov
Technical Director, Inference SoC Architecture
Shuying Fan
Sales/Solutions Architect
Andy Jaros
VP Sales
Tony Kozaczuk
Director, Solutions Architecture
Vinay Mehta
Inference Technical Marketing Manager
Aparna Ramachandran
Director, Hardware Engineering
Jeremy Roberson
Inference Software Architect & Technical Director
Geoff Tate
CEO
Cheng Wang
Co-Founder & SVP
Fred Ware
Technical Director, Inference Architecture
William Xie
Director, Inference Systems Engineering
Fang-Li Yuan
Manager, Hardware Engineering
VIEW OUR LIVE SESSION
Thursday, November 19, 2020, 9:20 AM - 9:45 AM

VIEW OUR USEFUL RESOURCES:
Earlier presentations can be found under Resources at:
ABOUT US:
We enable smarter and more flexible systems and semiconductors with our InferX and nnMAX Neural Inferencing solutions, which deliver more throughput on tough models at lower cost and power.
The InferX X1 Edge Inferencing Co-Processor uses our nnMAX Inference Acceleration Architecture, which delivers high-accuracy throughput at lower cost and power than alternatives. It achieves high MAC utilization, so it needs less silicon, and it requires much less DRAM bandwidth, so it burns fewer dollars and watts on DRAM. All of this is done at batch=1, which is critical for edge applications.
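To make the utilization argument concrete, here is a minimal sketch (all numbers are hypothetical for illustration, not published InferX or nnMAX specifications) of how higher MAC utilization lets a smaller array deliver the same effective throughput:

```python
# Illustrative only: the peak-TOPS and utilization figures below are
# hypothetical, not published InferX/nnMAX specifications.

def effective_tops(peak_tops: float, mac_utilization: float) -> float:
    """Throughput actually delivered on a model: peak MAC capacity
    scaled by the fraction of MAC cycles doing useful work."""
    return peak_tops * mac_utilization

# Two hypothetical accelerators delivering the same effective 7 TOPS:
# the high-utilization design needs far fewer peak MACs (less silicon).
high_util = effective_tops(peak_tops=10, mac_utilization=0.70)  # 7.0 TOPS
low_util = effective_tops(peak_tops=28, mac_utilization=0.25)   # 7.0 TOPS
print(high_util, low_util)
```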
For companies seeking an inferencing accelerator to integrate into an SoC, we offer nnMAX as IP that scales from 1 to more than 100 TOPS. nnMAX is available in TSMC 12FFC/16FFC and will be available in GlobalFoundries 12LP in 2021.
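Likewise, a rough sizing sketch for the scalable-IP claim, assuming a hypothetical per-tile throughput rating (not a published nnMAX figure):

```python
import math

# Hypothetical sizing sketch: "tops_per_tile" is an assumed figure
# for illustration, not a published nnMAX rating.
def tiles_needed(target_tops: float, tops_per_tile: float = 2.0) -> int:
    """Number of nnMAX tiles to instantiate to reach a target TOPS."""
    return math.ceil(target_tops / tops_per_tile)

for target in (1, 10, 100):
    print(f"{target:>3} TOPS -> {tiles_needed(target)} tile(s)")
```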