Webinar: ADAS and Real-Time Vision Processing
by Daniel Nenni on 12-13-2017 at 7:00 am

ADAS is in many ways the epicenter of development in the driverless car (or bus, or truck). Short of actually driving the car hands-free through a whole trip, ADAS has now advanced beyond mere warnings to providing some level of steering and braking control (in both cases for collision avoidance), adaptive cruise control, adaptive lighting (high-beam/low-beam), parking assistance, and more. All of this requires a greatly enhanced picture of what is going on around the car, and that takes lots of sensors (cameras, radar, lidar, …), lots of sensor fusion and a very significant level of image recognition, all starting with some very sophisticated signal processing. You can learn more about the camera monitoring part of this (including monitoring you, the not-always-reliable driver) by watching an upcoming CEVA webinar on the topic.

Register Now for this important Webinar on January 3rd at 7:00 am PST

Now if you want to build this kind of solution into your ADAS system (CEVA is looking particularly at Tier-1 providers here), you could start by building up all that image recognition technology yourself, or you could start with a solution that already incorporates detection engines for pedestrians, vehicles, lane markers and moving objects (deer for example, a major cause of injuries and car damage where I live; bears and cows are generally fatal all round). The solution also includes CEVA's programmable vision platform, to which you can add your own differentiated image processing.
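
To make the division of labor concrete, here is a minimal, purely illustrative sketch in Python using OpenCV, not CEVA's SDK or the APACHE4 detection engines: off-the-shelf pedestrian and lane-marker detectors stand in for the pre-built detection engines, and a stub function marks where a Tier-1 would plug in its own differentiated processing. The input file name dashcam.mp4 is hypothetical.

```python
import cv2
import numpy as np

# Stand-in "detection engine" #1: OpenCV's built-in HOG pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame):
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return boxes  # array of (x, y, w, h) rectangles

# Stand-in "detection engine" #2: simple lane-marker detection
# via Canny edges plus a probabilistic Hough transform.
def detect_lane_markers(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else lines[:, 0]  # (x1, y1, x2, y2) segments

def my_differentiated_stage(frame, pedestrians, lanes):
    # Placeholder for the processing a Tier-1 would add on top of the
    # pre-built engines, e.g. fusing detections with radar tracks or
    # running its own network. Here it just summarizes the detections.
    return {"pedestrians": len(pedestrians), "lane_segments": len(lanes)}

if __name__ == "__main__":
    cap = cv2.VideoCapture("dashcam.mp4")  # hypothetical dash-cam clip
    ok, frame = cap.read()
    while ok:
        peds = detect_pedestrians(frame)
        lanes = detect_lane_markers(frame)
        print(my_differentiated_stage(frame, peds, lanes))
        ok, frame = cap.read()
    cap.release()
```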

The great thing about emerging ADAS technologies is the amazing capabilities they enable. The bad thing, from a provider's point of view, is the incredible range of technologies you have to bring together, and qualify, to make all those amazing features possible. Why start from scratch when companies like CEVA have already done the base-level heavy lifting for you?

Register Now for this important Webinar on January 3rd at 7:00 am PST

Summary
As the automotive market experiences accelerated growth and rapid adoption of vision applications such as Camera Monitoring Systems, Smart Rear Cameras, and Driver Monitoring Systems, there is a need for efficient, cost-effective solutions that address these applications in high volumes. These solutions must also allow Tier-1s both to differentiate and to meet the growing performance demands of today's OEMs.

Nextchip's APACHE4 is a vision-based pre-processor SoC targeting next-generation ADAS systems, built around a dedicated sub-system of image processing accelerators and optimized software. The APACHE4 incorporates dedicated detection engines for pedestrian detection, vehicle detection, lane detection and moving object detection, and Nextchip has integrated CEVA's programmable vision platform alongside its differentiated image processing accelerators to enable advanced and affordable ADAS applications.

Join CEVA and Nextchip experts to learn about:
· Challenges of ADAS and vision-based autonomous driving
· Overview of the Nextchip APACHE4 ADAS SoC
· Utilization of CEVA-XM4 for differentiation and performance
· Application use cases with APACHE4 and CEVA-XM4

Target Audience

Computer vision engineers, deep learning application designers, project managers, marketing experts and others interested in embedded vision, machine learning, and autonomous driving.

Speakers
Jeff VanWashenova
Director, Automotive Segment Marketing, CEVA

Young-Jun Yoo
Director, Strategic Marketing, Nextchip

About CEVA, Inc.
CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, computer vision and computational photography for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi (802.11 a/b/g/n/ac up to 4×4) and serial storage (SATA and SAS). Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.
