Detecting levels of driver impairment: an intelligent steering wheel monitors the driver's heart rate to detect potential drowsiness. This is achieved by embedding a hand-grip heart rate monitor into the steering wheel of the vehicle. The driving style itself is analyzed with computational methodologies from artificial intelligence and applied computing in transportation.
Fuzzy logic model: a branch of artificial intelligence (AI) that characterizes uncertainty in the data by extending the true/false concepts of classical logic into a machine-generated model. The driving events analyzed include:

1. Sudden accelerations or decelerations
2. Sudden braking
3. Sharp turns
4. Sets of events such as start, stop, speed, and turns
5. Maximum and minimum engine rpm
6. Number of red-light jumps
7. Number of tailgating cases
8. Number of aggressive honking events
9. Number of wrong-side overtaking maneuvers

Fuzzification: this stage defines the membership functions and linguistic variables of the inputs. Rule evaluation: in this stage, the fuzzy logic rules are applied to calculate the output.
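The fuzzification and rule-evaluation stages described above can be sketched as follows. The membership function shapes, thresholds, and the rule base below are illustrative assumptions; the text does not specify them.

```python
# Minimal sketch of fuzzification and rule evaluation for driving-style
# scoring, assuming triangular membership functions and illustrative
# thresholds (the actual functions and rules are not given in the text).

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_braking(events_per_hour):
    """Map a sudden-braking rate onto linguistic variables (fuzzification)."""
    return {
        "low":    tri(events_per_hour, -1, 0, 4),
        "medium": tri(events_per_hour, 2, 5, 8),
        "high":   tri(events_per_hour, 6, 10, 15),
    }

def evaluate_rules(braking, honking):
    """Toy rule base: the driver is 'aggressive' to the degree that braking
    is high AND honking is high (min implements fuzzy AND)."""
    return min(braking["high"], honking["high"])

braking = fuzzify_braking(8)
honking = fuzzify_braking(9)   # same membership shape reused for illustration
print(evaluate_rules(braking, honking))
```

A full system would defuzzify the aggregated rule outputs into a crisp driving-style score; the `min` conjunction shown here is the common Mamdani-style choice.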
Speed limits will also differ from road to road. Our solution combines three different approaches, which increases accuracy and makes more intuitive use of the new generation of driver assistance functions.
Most solutions being tested are based on image processing alone. By exchanging data with other devices on the Internet of Things (IoT), active safety modules can assist drivers in making decisions based on the overall traffic status and can replace traffic lights for adaptive scheduling of vehicles at intersections [14].
Active safety modules can also estimate the risk of current driving behaviors by analyzing dynamic information from nearby vehicles via telecommunication services and cloud computing. If the risk is high and might cause a collision, the vehicle can warn the driver to correct the driving behavior, and in urgent cases, the active safety modules can take over control of the vehicle to avoid a traffic accident [15].
The latest active safety modules have achieved the identification of traffic signs by applying deep machine learning technology. As a result, a vehicle can recognize a traffic warning or limitation and remind the driver not to violate the traffic rules [16]. In response to the need for intelligent transportation, ADAS research has focused on autopilots, with many countries, especially the US, Japan, and some European countries, investing a lot of money and effort into their development and making outstanding achievements [17].
Vehicular ad hoc network (VANET) technology, which provides channels for collecting real-time traffic information and scheduling vehicle crossings in intersection zones, offers a new approach to relieving traffic pressure when traditional governance cannot solve congestion effectively. It reduces the average vehicle waiting time and improves traveling efficiency and safety by gathering appropriate traffic-related data and optimizing scheduling algorithms [18–20].
The traffic-sign recognition function, which includes traffic-sign detection and traffic-sign classification, has been developed to solve this issue via machine vision technology. Since camera-captured images include a lot of useless information, sliding-window technology has been used to locate the traffic-sign region in the image.
Then, algorithms such as the histogram of oriented gradients (HOG), support vector machines (SVM), random forests, and convolutional neural networks (CNN) are used for feature detection and classification [21–23].
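The sliding-window localization step described above can be sketched as follows. The window size, stride, and the stub classifier are illustrative assumptions; in the cited work the crop at each position would be fed to a HOG + SVM pipeline, a random forest, or a CNN.

```python
# Sketch of sliding-window localization: a fixed-size window steps across
# the image and each position is handed to a classifier. The classifier
# here is a stub so the control flow stays visible; a real system would
# extract HOG features and score them with an SVM or a CNN.

def sliding_windows(width, height, win=32, stride=16):
    """Yield the top-left corner of every window position."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield x, y

def detect(image_width, image_height, classify):
    """Return all window positions the classifier flags as a traffic sign."""
    return [(x, y) for x, y in sliding_windows(image_width, image_height)
            if classify(x, y)]

# Stub classifier: pretend a sign sits at window position (64, 32).
hits = detect(128, 64, classify=lambda x, y: (x, y) == (64, 32))
print(hits)
```

The quadratic number of window positions is exactly why the next paragraph notes that sliding windows are time-consuming: every position costs one classifier evaluation.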
Since sliding-window technology is rather time-consuming, some researchers have proposed other solutions for locating traffic-sign regions. One of the most important functions of ADAS is collision avoidance, where warning technology senses potential accident risks based on factors such as vehicle speed and the space between vehicles [22]. One obvious challenge, however, is that space information may be missing in blind spots that the sensors cannot cover [23].
Since then, collision avoidance warnings have been based not only on passive measurements but also on status data of nearby vehicles collected via active communication [25]. Even though many different measures have been used in danger detection, one issue remains challenging. Colorful traffic cones that temporarily mark roads for maintenance control or accident-scene protection are often hard for space sensors to detect and process due to their small size.
If neither the driver nor the ADAS notices the traffic cones on the road, serious human injuries and property damage may occur. Some fruitful research on detecting traffic cones has been conducted using cameras and LiDAR sensors together with technologies such as machine vision, image processing, and machine learning [26–28]. However, some problems have become noticeable. First, high-quality sensors like LiDAR are expensive, and manufacturers are not willing to install them without a sharp cost decrease.
Second, machine learning technology requires substantial system resources, which on-board computers cannot supply. Thus, the overall objective of this study was to develop a cost-effective machine vision system that can automatically detect road traffic cones based on the cone distribution to avoid potential accidents. This method was able not only to recognize traffic cones on the roads but also to sense their distance and assist automatic vehicle control in navigating them smoothly.
This required the development of algorithms for quick recognition of traffic cones by color and for sensing the corresponding distance data. The embedded computer, which worked as the brain of the car, not only controlled the machine vision system that captured the road images but also sent appropriate commands to the VCU after processing the road images and analyzing the car's status. The VCU performed as a bridge between the embedded computer and the onboard hardware. It collected real-time status data of the car and sent it to the embedded computer. At the same time, it controlled the BMS, the DC motor controller, and the brake controller whenever the embedded computer issued valid commands.
For safety reasons, the VCU rejected any invalid commands, as well as any commands received in the presence of a component error. In this experiment, red and blue traffic cones indicated the left and right edges of a temporary road, while yellow ones marked the start and end of the road.
Figure 3 shows the Smart Eye B1 camera system, consisting of four cameras, chosen for this research. Two monochrome cameras composed a stereo vision system used for sensing real-time 3-dimensional environment data, whereas the color cameras captured color information. Additionally, the camera system can automatically adjust its white balance. All cameras shared the same resolution setting, and the frame rate of all cameras was set to 12 fps.
Two independent megabit-bandwidth Ethernet links controlled the data exchange for the monochrome and color cameras. Example images are shown in Figure 4(a). In this experiment, the two monochrome cameras were used to build a stereo vision system. A point P(x, y, z) in the world coordinate system projects into the two cameras at coordinates P_left(x_l, y_l, z_l) and P_right(x_r, y_r, z_r). Since the heights of the two cameras were the same, y_l and y_r were equal, and the 3-dimensional coordinates could be reduced to 2-dimensional coordinates for analysis, as shown in Figure 5.
According to the triangle similarity law, the relation shown in equation (1) holds. From equation (1), the x, y, and z values can be calculated with the corresponding equations. A processed depth image is presented in Figure 4(b), with warmer colors indicating longer distances.
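The equations referenced above were not preserved in this copy of the text. For a standard parallel stereo rig, the triangle-similarity relation takes the well-known form x_l = f·x/z, x_r = f·(x − b)/z, y_l = f·y/z, where the focal length f and baseline b are assumed parameters; solving for the world coordinates gives the sketch below.

```python
# Standard parallel-stereo triangulation (an assumed reconstruction of the
# missing equations, not the paper's exact form). With focal length f and
# baseline b, the projections are:
#   x_l = f*x/z,   x_r = f*(x - b)/z,   y_l = y_r = f*y/z
# so the disparity d = x_l - x_r yields the depth z = f*b/d.

def triangulate(xl, yl, xr, f, b):
    """Recover the world point (x, y, z) from left/right image coordinates."""
    d = xl - xr              # disparity; positive for a point in front
    z = f * b / d
    x = b * xl / d
    y = b * yl / d
    return x, y, z

# A point roughly 2 m ahead, with normalized f = 0.5 and baseline b = 0.1 m:
print(triangulate(xl=0.25, yl=0.1, xr=0.225, f=0.5, b=0.1))
```

This also explains the depth image in Figure 4(b): each pixel's disparity is converted to a z value, then color-mapped by distance.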
The original depth image was converted from a 32-bit floating-point matrix to a color image because the float data exceeded the displayable pixel range of the current operating system. All traffic cones had the same shape, size, and reflective stripes and differed only in color. Since the differences between the yellow, red, and blue colors were obvious, the cones could be distinguished in the color images by processing them during the daytime. The color detection algorithm is shown in equation (3).
The red, green, and blue values of each pixel of the color image H(x, y) were used for ratio calculations that determined the pixel's color feature. The thresholds T1 to T7 were set based on the experimental results. Since various objects appeared in the color images with colors similar to those of the traffic cones, it was necessary to eliminate them as noise. Because the traffic cone size in the images is inversely proportional to the distance, fake traffic cone pixels were filtered based on the blob size S and the average distance data D, as shown in equation (5).
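The per-pixel ratio test described above can be sketched as follows. The exact ratio formulas of equation (3) and the threshold values T1–T7 are not given in the text, so the ratios and thresholds below are illustrative assumptions.

```python
# Sketch of per-pixel color classification by channel ratios, in the spirit
# of equation (3). The ratio definitions and the threshold values are
# assumptions; the paper's T1-T7 values are not preserved here.

def classify_pixel(r, g, b):
    """Label a pixel as a candidate cone color from its channel ratios."""
    total = r + g + b
    if total == 0:
        return None
    rr, gr, br = r / total, g / total, b / total
    # Assumed thresholds standing in for the experimental T1-T7:
    if rr > 0.55 and gr < 0.25:
        return "red"
    if br > 0.50 and rr < 0.25:
        return "blue"
    if rr > 0.40 and gr > 0.40 and br < 0.15:
        return "yellow"
    return None

print(classify_pixel(200, 40, 30))    # a strongly red pixel
```

Using ratios rather than raw channel values makes the test partially invariant to overall brightness, which matters outdoors where illumination varies across the scene.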
A candidate was ignored if S was less than the threshold at distance D, and it was confirmed as a traffic cone if S was equal to or larger than the threshold at D. Finally, minimal external rectangles were calculated to mark all of the confirmed traffic cones in the area as the detected traffic cones. The experiment was separated into a color marking test and a distance matching test. The color marking test focused on traffic cone recognition, whereas the distance matching test validated the space-measuring function.
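The size-versus-distance noise filter described above can be sketched as follows. The threshold curve is an assumption: since apparent cone size shrinks roughly in inverse proportion to distance, the minimum expected pixel area is modeled here as k / D with an illustrative constant k.

```python
# Sketch of the size-versus-distance filter (in the spirit of equation (5)).
# The threshold model k / D and the constant k are assumptions; the paper's
# actual threshold values are not preserved here.

def is_real_cone(size_px, distance_m, k=2000.0):
    """Confirm a candidate blob only if it is at least as large as a real
    cone would appear at that distance; smaller blobs are treated as noise."""
    threshold = k / distance_m      # minimum expected pixel area at distance D
    return size_px >= threshold

print(is_real_cone(size_px=250, distance_m=10))   # large enough: confirmed
print(is_real_cone(size_px=50,  distance_m=10))   # too small: rejected as noise
```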
Twenty red traffic cones, fourteen blue cones, and sixteen yellow cones were manually placed in front of the experiment car. As shown in Figure 6 , recognized traffic cones were marked by rectangles with the same colors as the bodies of the cones, whereas the unrecognized ones were marked with white rectangles. The three undetected red traffic cones were located close to the left and right edges of the image and placed on a section of the playground that was reddish in color.
Also, one of them was 10 meters from the camera, and two were over 20 meters away. The ground color might have influenced the red color recognition. After the traffic cone marking process, the distance data matching test was conducted; the results are shown in Figure 7.
However, only 15 of the 20 red traffic cones had corresponding distance data in the pixel area of the depth image. The Intelligent Driving system collects data on speed, location, topography, traffic light patterns, congestion, and more. These data are then funneled from the cloud into a propulsion controller that actively adjusts vehicle speed, energy balance, and performance to help the vehicle make smarter, energy-saving decisions. It also schedules a variety of messages to keep the driver informed, including information about incoming text messages or calls.
When applied to hybrid electric vehicles, Intelligent Driving technology has the potential to improve efficiency even further as new driver assistance technologies emerge across all propulsion systems, including internal combustion, hybrid, and full electric. Adaptive cruise control (ACC) lets the vehicle hold a speed while adjusting to changing traffic conditions with automatic braking and acceleration. ACC reduces the number of sudden accelerations and decelerations, enables speed synchronization among vehicles, encourages smooth lane-change behavior, and reduces the possibility of accidents.
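The ACC behavior just described can be sketched as a simple decision rule: hold the driver-selected speed on a clear road, but converge toward the lead vehicle's speed when the gap falls below a time-headway threshold. The headway value and the control structure are illustrative assumptions, not a production controller.

```python
# Sketch of adaptive cruise control: hold the set speed, but brake toward
# the lead vehicle's speed when the gap drops below a time-headway limit.
# The 2-second headway and the bang-bang structure are assumptions.

def acc_command(own_speed, set_speed, gap_m, lead_speed, headway_s=2.0):
    """Return a target speed (m/s) for the next control step."""
    safe_gap = own_speed * headway_s        # desired gap at the current speed
    if gap_m < safe_gap:
        # Too close: match the lead vehicle (never exceed the set speed).
        return min(lead_speed, set_speed)
    # Clear road: hold the driver-selected speed.
    return set_speed

print(acc_command(own_speed=30, set_speed=30, gap_m=40, lead_speed=25))  # brake
print(acc_command(own_speed=30, set_speed=30, gap_m=80, lead_speed=25))  # hold
```

A real controller would ramp the speed smoothly (e.g., with a PID loop) instead of switching targets instantaneously; the rule above only captures the decision logic.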
When the situation gets out of hand, the vehicle is decelerated and, in the worst conditions, stopped for the safety of the driver. Lane departure warning alerts the driver when the car begins to leave its lane without obvious input from the driver, for instance, when the driver is distracted or very tired. A video camera in the rear-view mirror allows the electronics to track the lane markings on the road ahead. If the car begins to drift off track for no apparent reason, the driver is alerted by an audio or haptic warning, such as a vibrating seat; the system matches all the available information and then decides what to display on the HMI.
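The lane departure warning logic described above can be sketched as two conditions: the car is drifting toward the lane edge, and there is no sign of intent (turn signal or steering input). The offset margin and steering threshold below are illustrative assumptions.

```python
# Sketch of lane-departure warning: alert only when the lateral offset
# approaches the lane edge AND no intentional driver input is present.
# The 0.2 m margin and 0.1 steering threshold are assumed values.

def ldw_alert(lateral_offset_m, lane_half_width_m, turn_signal_on,
              steering_input, margin_m=0.2):
    """Trigger a warning when the car nears the lane edge unintentionally."""
    drifting = abs(lateral_offset_m) > lane_half_width_m - margin_m
    intentional = turn_signal_on or abs(steering_input) > 0.1
    return drifting and not intentional

print(ldw_alert(1.7, 1.8, turn_signal_on=False, steering_input=0.0))  # warn
print(ldw_alert(1.7, 1.8, turn_signal_on=True,  steering_input=0.0))  # lane change
```

The lateral offset itself would come from the camera's lane-marking tracker mentioned in the text; this sketch only covers the decision step that follows.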
The assistant indicates to the driver via the central display whether a parking gap is big enough for the car, a tight squeeze, or simply too small. The same visual and acoustic signals featured in the parking assistant then warn the driver of any obstacles. Dangers that the sensors can pick up include how close the vehicle is to the surrounding vehicles, how much its speed needs to be reduced when going around a curve, and how close the vehicle is to leaving the road.