2016 Agenda

Conference Day One

Tuesday 26 April – Conference Day One

  1. 8:00

    Registration

  2. 8:45

    Chair's opening remarks

    Salah Hadi | Global R&D Director Vision & Night Vision Systems of Autoliv Vision Systems, Sweden

Visions and insights from vehicle manufacturers and Tier 1 suppliers in Europe, Asia and the USA

The short-, medium- and long-term challenges where vehicle manufacturers see a need for focused effort from the image sensor industry

  1. 9:00

    Opening keynote address

    Dr Bakhtiar Litkouhi | Technical Fellow and Manager of Automated Driving and Vehicle Control Systems of General Motors, USA

  2. 9:30

    The vision of Hitachi, a global Tier 1 supplier, in the field of autonomous driving

    Dr Massimiliano Lenardi | Laboratory Manager - Senior Researcher of HITACHI, France and Germany

    Hitachi’s Automotive Vision targets the supply of reliable components and telematics solutions enabling self-driving vehicles. Insights will be given on technologies and services needed to deploy autonomous driving, in particular on those provided by Hitachi, from total-sensing to control and communication technologies.

  3. 10:00

    Keynote address

  4. 10:30

    Panel discussion – Image sensor systems, applications and timelines: areas of focus in the short and medium term

    • How are customer expectations changing for IS applications in cars?
    • What are the latest challenges of integrating all the modules?
    • What are the key challenges OEMs want the IS ecosystem to address?
    • How should component suppliers approach the auto market – what do they need to consider to industrialise their ideas?
    • What is the future of interoperability?

    Chair
    Salah Hadi, Global R&D Director Vision Systems, Autoliv, Sweden

    Panellists include:

    Henrik Lind, Technical Expert, Volvo Car Corporation, Sweden

    David Sánchez Fernández, ADAS Technical Specialist, Jaguar Land Rover, UK

    Dr Bakhtiar Litkouhi, Technical Fellow and Manager, Automated Driving & Vehicle Control Systems, General Motors Research & Development, USA

    Dr Massimiliano Lenardi, Laboratory Director, Hitachi Europe Ltd

  5. 11:15

    Break

  6. 11:45

    Euro NCAP's vision towards 2020

    Richard Schram | Technical Manager of Euro NCAP, Belgium

    The presentation will show the current developments of test and assessment protocols and will outline Euro NCAP’s vision towards 2020 and beyond.

  7. 12:15

    Programmable automotive headlights: a high-performance lighting and imaging system for driving at 100 mph

    Srinivasa Narasimhan | Associate Professor Robotics Institute of Carnegie Mellon University, USA

    • Gain insight into a new design for a headlight that can be programmed to perform several tasks simultaneously and that can sense, react and adapt quickly to any environment, with the goal of increasing safety for all drivers on the road
    • Explore the engineering challenges in building this headlight as a high-throughput, low-latency platform for computational imaging and lighting
    • Assess experiences with the prototypes developed over the past two years
  8. 12:45

    Lunch with the speakers

Conference Day Two

Wednesday 27 April – Conference Day Two

  1. 8:00

    Delegate registration

  2. 9:00

    Chair's opening remarks

The roadmap towards autonomous driving

  1. 9:15

    Autonomous driving: image sensor advances and the outlook for autonomous driving

    Ian Riches | Director, Global Automotive Practice of Strategy Analytics, UK

    • Answering the “ultimate” question: When will driverless cars be mainstream?
    • ADAS market development
      • What functions & features are driving growth?
      • How is the role of image sensors evolving?    
    • The path to autonomous driving
      • OEM Evolutionary Path
      • “Outsiders” revolutionary Path
    • Long-term potential winners & losers
  2. 9:45

    A vision for smart mobility technologies and services using connected cars

    Ralf-Peter Schäfer | Vice President Traffic and Travel Information Product Unit and Fellow of TomTom, Germany

    The presentation outlines the technical insights of TomTom's traffic data analytics portfolio, which processes billions of probe-based speed measurements every day. Since the introduction of TomTom's historical and real-time traffic technologies, IQ Routes and TomTom Traffic, in 2007, the portfolio has been implemented in over 45 countries globally.

    The backbone of the technology is community data from navigation devices, fleet management solutions and GPS-based smartphone applications for road users. Today, the entire TomTom traffic community comprises more than 400 million GPS-enabled devices, forming the basis for big data analytics in mobility, planning, geo-marketing and transport management.

    The presentation will give insights into the big traffic data archive and its statistical information, with examples of how the traffic and digital map database can be used in different areas of traffic analytics such as traffic information, traffic planning, traffic management, geo-marketing, smart mobility and connected services for road travellers.

  3. 10:15

    Panel discussion – Connected cars and autonomous driving: a discussion of the consumer and regulatory issues affecting the long-term vision for the automotive image sensor ecosystem

    • Implementation steps to autonomous driving – what are the main issues that remain to be resolved in order for it to become a reality? 
    • To what extent will the concept of the car we know today need to change to accommodate new vision technologies?
    • Consumers - how will they buy into the idea of the autonomous car; what lessons are being learned from ADAS take-up?
    • Developing advanced decision-making algorithms that will allow autonomous vehicles to make safe driving decisions with and without human input

    Panellists include:

    Ian Riches, Director, Global Automotive Practice, Strategy Analytics, UK

    Ralf-Peter Schäfer, Vice President Traffic and Travel Information Product Unit, TomTom, Germany

Innovations in ADAS processing systems

Enabling image sensors that meet image processing requirements and standards

  1. 11:30

    Front camera systems

  2. 12:00

    Session details to be confirmed

    Allan McAuslin, Product Line Manager, Vision and Automated Drive, NXP Semiconductors

  3. 12:30

    Session details to be confirmed

  4. 13:00

    Lunch with the speakers

Multi-mode sensor technologies and sensor fusion

Systems combining vision sensors with radar, LiDAR (light detection and ranging), ultrasonic and infrared sensors – the sensor types that matter for ADAS and autonomous driving, and low-level versus high-level fusion

  1. 14:00

    Sensor fusion and software solutions for highly automated driving

    Mario Brumm | Co-Founder of Ibeo Automotive Systems, Germany

    The automotive industry will have to introduce more automation to the vehicle whilst considering human-factors aspects. The aim of the automation has to be high usability and driver confidence in the automated system, as well as seamless switching between manual and automated driving that keeps the driver in full control of the vehicle. This session explores:

    • The roadmap from driver assistance to highly-automated driving
    • Technological challenges in the functional development 
  2. 14:30

    Developing robust machine vision interfaces and highly efficient dynamic in-cabin scanning illumination for driver monitoring systems and gesture interfaces

    • TriLumina’s solid-state scanning laser system isolates the driver’s face, dynamically illuminating only the area of interest without the inefficiency, heat and red glow of LEDs
    • eyeSight's touch-free interface and in-car sensing solution provides safer driving experiences when interacting with automotive systems
    • eyeSight’s gesture and head tracking software directs the TriLumina Smart Illuminator to dynamically illuminate only specific areas in the field of view, minimizing power consumption, reducing noise and providing the most robust DMS solution on the market

    Speakers:

    David Abell, Chief Strategy Officer and Co-Founder, TriLumina Corp., USA

    Gideon Shmuel, CEO, eyeSight Mobile Technologies, USA

  3. 15:00

    Single-photon avalanche diodes: time-of-flight 3D detectors for ADAS applications

    Pierre-Yves Cattin | Co-founder of Fastree3D SA, Switzerland

  4. 15:30

    Break

  5. 16:00

    The HD-TVI video transport system

    Dr Feng Kuo, CTO and Co-Founder, Techpoint Inc, USA

    The increasing use of video cameras for vision and ADAS applications in automobiles has prompted the development of a more robust HD video transport system, the High Definition Transport Video Interface (HD-TVI). It is capable of transmitting HD video (720p/1080p) through low-cost coaxial cable/connectors or unshielded twisted-pair wires over 300 m thanks to the low signal bandwidth used. This HD analogue video transmission system builds on the widely adopted SD analogue video format, retaining its robustness while avoiding its susceptibility to video artefacts. The transmission is real-time with minimal latency and also supports a bi-directional data channel over the same cable. With low EMI and resilience against interference, it provides an alternative, low-cost solution for HD video transmission in the automotive environment.

  6. 16:30

    Session details to be confirmed

  7. 17:00

    Closing remarks and end of conference

Day One Breakout Session A

Breakout Session A: The latest sensor and lens technologies

Addressing today's key challenges through improvements in performance, power consumption, quality, robustness and efficiency

Chair: David Sánchez Fernández, ADAS Technical Specialist, Jaguar Land Rover, UK

  1. 14:00

    Automotive lenses – characteristics and future trends

    Winwe Qiu | General Manager of Ningbo Sunny Automotive Optech Co., Ltd

  2. 14:30

    Developing CMOS image sensors with a global shutter function

    Mario Heid | General Manager OVT Europe of OmniVision

  3. 15:00

    BrightEye™ – a vision system for advanced driver assistance systems (ADAS)

    Dr Ofer David | CEO of BrightWay Vision

    Gated imaging has shown great potential in automotive imaging. Raw day and night video imagery for obstacle detection functionalities will be presented, along with a comparison of gated and passive imaging in harsh weather conditions. Simplified traffic sign localization in 3D will also be discussed.

    Take this opportunity to see a demo of this exciting technology.

  4. 15:30

    休憩

  5. 16:00

    facetvision – a new camera system going where no camera has gone before

    Andreas Brückner | Head of Microoptical Imaging Systems Group of Fraunhofer IOF, Germany

    There is a constant trend for miniaturization of digital cameras which is mainly driven by mobile devices like smartphones but also extremely valuable for applications in automobile and machine vision. It pushes the shrinking of opto-electronic, electronic and optical components. While opto- and micro-electronics have made tremendous progress, the miniaturization of optics still struggles to keep up. The demands for higher image resolution and large aperture of the lens (both driven by smaller pixel size) conflict with the need for a short focal length and a simple, compact design in terms of miniaturization. Array cameras inspired by the smallest known vision systems in Nature – the compound eyes – offer a way out of the dilemma.

    The contribution provides an illustration of the fundamental limits of the miniaturization of digital imaging systems. It is shown that these limits can be at least partly overcome by the convergence of microelectronics, microoptics and image processing applied in array cameras. The basics about the wafer-level optics fabrication technology are presented in order to demonstrate its potential for high-precision, parallelized production and thus for reducing the production costs. Finally, the contribution gives examples of realized demonstrators for array imaging sensors and cameras of smallest size that are able to overcome the scaling limits of traditional optics and thus to go where no camera has gone before.

  6. 16:30

    The future of automotive stereo cameras

    Dr Michael Chiu | Chief Technology Officer of Automation Engineering Incorporated

    • Critical attributes of stereo cameras and evolution into the future
    • Enabling processes and components for stereo cameras, including image sensors, manufacturing & test processes
  7. 17:00

    Closing discussion and wrap-up

  8. 17:45

    Reception and vehicle demonstrations

Day One Breakout Session B

Breakout Session B: Image processing and computer vision

  1. 14:00

    Heterogeneous computing for real-time computer vision

    Kari Pulli, Senior Principal Engineer, Imaging and Camera Technologies Group, Intel

    • Specialized accelerators can be orders of magnitude more efficient than general-purpose hardware, but they can be difficult to program
    • Recent standards such as OpenCL and OpenVX can make such accelerators easier to access
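
    The OpenCL point above can be made concrete with a small sketch. The following is a minimal, hedged illustration (not material from the talk) of how a pixel-wise image operation can be offloaded to whatever accelerator the OpenCL runtime exposes, using the pyopencl bindings; the kernel, function names and threshold value are all hypothetical.

# Minimal, hypothetical sketch: dispatching a pixel-wise threshold kernel
# to an OpenCL device (CPU, GPU or other accelerator chosen by the runtime).
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void threshold(__global const uchar *src,
                        __global uchar *dst,
                        const uchar level)
{
    int i = get_global_id(0);
    dst[i] = src[i] > level ? 255 : 0;
}
"""

def threshold_image(pixels: np.ndarray, level: int = 128) -> np.ndarray:
    ctx = cl.create_some_context()            # pick any available OpenCL device
    queue = cl.CommandQueue(ctx)
    prg = cl.Program(ctx, KERNEL_SRC).build()

    flat = np.ascontiguousarray(pixels.ravel().astype(np.uint8))
    mf = cl.mem_flags
    src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=flat)
    dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, flat.nbytes)

    # Launch one work-item per pixel, then copy the result back to the host.
    prg.threshold(queue, flat.shape, None, src_buf, dst_buf, np.uint8(level))
    out = np.empty_like(flat)
    cl.enqueue_copy(queue, out, dst_buf)
    return out.reshape(pixels.shape)
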
  2. 14:30

    Neural networks specialized for automotive applications

    Etienne Perot, Vision Research Engineer, Valeo

  3. 15:00

    Session details to be confirmed

    Mayank Mangla, ADAS Imaging Architect, Texas Instruments

  4. 15:30

    Break

Breakout Session B: Driver monitoring systems

  1. 16:00

    Gesture control and driver monitoring: challenges in applying innovative image processing technologies to the car

    Sascha Klement | Managing Director, CTO of gestigon GmbH

    • Features and use cases for gesture control and driver monitoring
    • Technical requirements in automotive applications
    • KPIs for assessing the quality of such systems
    • Testing and quality assurance strategies 
    • Social challenges in the transition from innovative prototypes to automotive-grade series production
  2. 16:30

    Time-of-flight driver monitoring systems: self-monitoring technology based on mass-produced components

    Dr. Bernd Buxbaum | CEO/CTO, Founder of pmdtechnologies

    In semi-autonomous and autonomous driving, driver awareness remains an important safety factor. For safe driver monitoring, ToF (time-of-flight) sensing provides unique robustness features at a reasonable cost target.

    • Integrated availability recognition: intrinsic amplitude value with each distance including saturation recognition
    • Real monitoring of whole sensor chain: reference channel in each measurement cycle
    • Long-term availability: robust availability, even under mechanical stress
    • Usage of mass market components only – all system components are proven in use
    • Moderate calculation effort for accurate distance information
    • One camera head for eye lid recognition and distance measurement
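
    The "moderate calculation effort" point reflects how continuous-wave ToF works: distance follows from the phase shift between the emitted and received modulated light. The sketch below states only the standard textbook relation; it is not pmdtechnologies-specific, and the 30 MHz modulation frequency in the example is an assumption for illustration.

# Standard continuous-wave time-of-flight relation (illustrative, not
# vendor-specific): d = c * phase_shift / (4 * pi * f_mod).
import math

C = 299_792_458.0  # speed of light in m/s

def cw_tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance in metres for a measured phase shift at a given modulation
    frequency; unambiguous only up to c / (2 * mod_freq_hz)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# Example: a phase shift of pi/2 at an assumed 30 MHz modulation frequency
# corresponds to about 1.25 m (unambiguous range 5 m).
print(cw_tof_distance(math.pi / 2, 30e6))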

  3. 17:00

    Developing driver monitoring system algorithms for the cars of the future

    Alexandru Drimbarean, Vice President Advanced Research, FotoNation Ireland

    Driver and passenger safety is one of the main concerns for car makers. The driver's undivided attention to the traffic is essential to avoid serious accidents. The National Highway Traffic Safety Administration conservatively estimates that 100,000 police-reported crashes are the direct result of driver fatigue each year, resulting in an estimated 1,550 deaths, 71,000 injuries and $12.5 billion in monetary losses. These figures may be the tip of the iceberg, since it is currently difficult to attribute crashes to sleepiness. Distracted driving, such as using a cell phone, texting or eating, is also the cause of 1 out of 5 crashes in the US; in 2012, 3,328 people were killed in crashes involving a distracted driver. Finally, the race toward semi-autonomous and fully autonomous driving makes a driver monitoring system paramount, since the vehicle needs to know the driver's state before it hands back control.

    The presentation will begin by detailing the key aspects of the image acquisition, processing and computer vision technologies, with emphasis placed on eye and gaze detection as well as head location and orientation calculation. These are required to continually monitor and assess the driver's state in order to detect conditions such as excessive drowsiness or lack of road attentiveness that could potentially lead to accidents. A more advanced use case is then introduced: driver identification implemented using face recognition technology. This is becoming increasingly relevant for car personalization (seat and mirror adjustments, as cars are used by multiple people) as well as for security, where it is particularly important for theft prevention. Finally, we address implementation specifics, emphasizing the flexibility that software can offer in terms of where the camera can be located and how hardware acceleration can bring performance and thermal management advantages.
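
    As a rough illustration of the monitoring loop described above (and emphatically not FotoNation's technology), the hedged sketch below uses OpenCV's stock Haar cascades to look for a face and eyes in each camera frame and flags sustained eye loss as a crude drowsiness/inattention proxy; the camera index, cascade choice and frame threshold are all assumptions.

# Crude driver-monitoring loop built on OpenCV's bundled Haar cascades.
# Illustrative only: a production DMS would use gaze and head-pose estimation.
import cv2

FACE_XML = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
EYE_XML = cv2.data.haarcascades + "haarcascade_eye.xml"

def monitor_driver(camera_index: int = 0, closed_frames_limit: int = 15) -> None:
    face_det = cv2.CascadeClassifier(FACE_XML)
    eye_det = cv2.CascadeClassifier(EYE_XML)
    cap = cv2.VideoCapture(camera_index)
    frames_without_eyes = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_det.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
        eyes_seen = False
        for (x, y, w, h) in faces:
            roi = gray[y:y + h, x:x + w]
            if len(eye_det.detectMultiScale(roi, scaleFactor=1.2, minNeighbors=5)) > 0:
                eyes_seen = True
        # Count consecutive frames in which no eyes are visible as a crude
        # proxy for eye closure or inattention.
        frames_without_eyes = 0 if eyes_seen else frames_without_eyes + 1
        if frames_without_eyes > closed_frames_limit:
            print("Possible drowsiness or inattention detected")
            frames_without_eyes = 0
    cap.release()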

  4. 17:30

    Closing discussion and wrap-up

  5. 17:45

    Reception and vehicle demonstrations

* The programme may change without prior notice due to unforeseen circumstances.