- Open Access
Detection, classification, and mapping of U.S. traffic signs using Google Street View images for roadway inventory management
© Balali et al. 2015
- Received: 25 July 2015
- Accepted: 20 October 2015
- Published: 2 November 2015
Maintaining an up-to-date record of the number, type, location, and condition of high-quantity, low-cost roadway assets such as traffic signs is critical to transportation inventory management systems. While databases such as Google Street View contain street-level images of all traffic signs and are updated regularly, their potential for creating inventory databases has not been fully explored. The key benefit of such databases is that once traffic signs are detected, their geographic coordinates can also be derived and visualized within the same platform.
By leveraging Google Street View images, this paper presents a new system for creating inventories of traffic signs. Using a computer vision method, traffic signs are detected and classified into four categories of regulatory, warning, stop, and yield signs by processing images extracted from the Google Street View API. Considering the discriminative classification scores from all images that see a sign, the most probable location of each traffic sign is derived and shown on Google Maps using a dynamic heat map. A data card containing information about the location and type of each detected traffic sign is also created. Finally, several data mining interfaces are introduced that allow for better management of the traffic sign inventories.
The experiments, conducted on 6.2 miles of the I-57 and I-74 interstate highways in the U.S., show an average accuracy of 94.63 % for sign classification and demonstrate the potential of the method to provide quick, inexpensive, and automatic access to asset inventory information.
Given the reliable performance shown through the experiments, and because collecting information from Google Street View imagery is cost-effective, the proposed method has the potential to deliver inventory information on traffic signs in a timely fashion and tie into existing DOT inventory management systems. Such spatio-temporal representations provide DOTs with information on how different types of traffic signs degrade over time and further provide the condition information necessary for predicting sign replacement plans.
- Traffic sign
- Roadway assets
- Inventory management system
- Detection and classification
- Data visualization
The fast pace of deterioration in existing infrastructure systems and the limited funding available have motivated U.S. Departments of Transportation (DOTs) to prioritize rehabilitation or replacement of roadway assets based on their condition. For bridge and pavement assets, which are high-cost and low-quantity, many state DOTs have already established asset management systems to track inventory and conditions (Golparvar-Fard et al. 2012). For traffic assets, however, most state DOTs do not have good statewide inventory and condition information, because the traditional methods of collecting asset information are cost prohibitive and offset the benefit of having such information.
Replacing a sign rated as poor in the U.S. can cost as much as $75 ((TRIP) 2014; Moeur 2014). Under current practices, at best the DOTs can only decide on costly alternatives, such as completely replacing all signs in a traffic zone or road section, without carefully filtering out those which can still serve for a few additional years. The need for prioritizing the replacement of existing traffic signs and the increasing demand for installing new ones have created a new demand for the DOTs to identify cost-effective methods that can efficiently and accurately track the total number, type, condition, and geographic location of every traffic sign.
To address the growing need for complete inventories, many state and local agencies have proactively looked into videotaping roadway assets using inspection vehicles equipped with high-resolution cameras and GPS (Global Positioning System). Roadway videos provide accurate visual information on the inventory and condition of high-quantity, low-cost roadway assets. Sitting in front of screens, practitioners can visually detect and assess the condition of the assets based on their own experience and a condition assessment handbook. The location information is also extracted from the GPS tags of these images. Nevertheless, due to the high cost of manual assessment, the number of inspections with these vehicles is very limited. This results in a one-year survey cycle for critical roadways and many years of complete neglect for all other local and regional roads. The high volume of data that needs to be analyzed manually has an undoubted impact on the quality of the analysis. Hence, many critical decisions are made based on inaccurate or incomplete information, which ultimately affects the asset maintenance and rehabilitation process. An accurate and safe video-based data collection and analysis method, if widely and repeatedly implemented, could streamline the process of data collection and yield significant cost savings for the DOTs (Hassanain et al. 2003; Rasdorf et al. 2009).
Capturing a comprehensive record is still not feasible, because current video-based inventory data collection methods do not typically cover local roadways and are not frequently updated.
Training computer vision methods requires large datasets of relevant traffic sign images, which are not available. Due to the high false positive and miss rates of current methods, condition assessment is still conducted manually on roadway videos.
Today, several online services collect street-level panoramic images on a truly massive scale. Examples include Google Street View, Microsoft StreetSide, Mapjack, EveryScape, and CycloMedia GlobeSpotter. The availability of these databases offers the possibility to perform automated surveying of traffic signs (Balali et al. 2015; I. Creusen and Hazelhoff 2012) and address the current problems. In particular, using Google Street View images can reduce the number of redundant enterprise information systems that collect and manage traffic inventories. Applying computer vision methods to these large collections of images has the potential to create the necessary inventories more efficiently. One has to keep in mind that, beyond changes in illumination, clutter/occlusions, and varying positions and orientations, the intra-class variability can challenge the task of automated traffic sign detection and classification.
Using these emerging and frequently updated Google Street View images, this paper presents an end-to-end system to detect and classify traffic signs and map their locations, together with their types, on Google Maps. The proposed system has three key components: 1) an API (Application Programming Interface) that extracts location information using the Google Street View platform; 2) a computer vision method capable of detecting and classifying multiple classes of traffic signs; and 3) a data mining method to characterize the data attributes related to clusters of traffic signs. In simple terms, the system outsources the task of data collection and in return provides accurate geo-spatial localization of traffic signs along with useful information such as roadway number, city, state, zip-code, and traffic sign type by visualizing them on Google Maps. It also provides automated inventory queries, allowing professionals to spend less time searching for traffic signs and instead focus on the more important task of monitoring existing conditions. In the following, the related work on traffic sign inventory management is briefly reviewed. Next, the algorithms for detecting traffic signs and generating the heat maps are presented in detail. The developed system can be found at http://signvisu.azurewebsites.net/, and a companion video (Additional file 1) is also provided with the online version of this manuscript.
Existing roadway inventory data collection methods and related studies
- Field inventory: using GPS survey and conventional optical equipment to collect desired information in the field
- Photo/video log: driving a vehicle along the roadway while automatically recording photos/videos, which can be examined later to extract information (Ai and Tsai 2014; Ai and Tsai 2011; DeGray and Hancock 2002; X. Hu et al. 2004; Jeyapalan 2004; Jeyapalan and Jaselskis 2002; Maerz and McKenna 1999; Robyak and Orvets 2004; Tsai et al. 2009; K. C. Wang et al. 2010; Wu and Tsai 2006)
- Integrated GPS/GIS mapping systems: using an integrated GPS/GIS field data logger to record and store inventory information
- Aerial/satellite photography: analyzing high resolution images taken from aircraft or satellites to identify and extract highway inventory information (Veneziano et al. 2002)
Examples of state DOT road inventory programs
- Photo log, integrated GPS/GIS mapping systems: cable barriers, concrete barriers, culverts, culvert ends, ditches, drainage inlets, glare screens, guardrails, impact attenuators, miscellaneous fixed objects, pipe ends, pedestals, roadside slope, rock outcroppings, special-use barriers, supports, trees, tree groupings, walls
- Integrated GPS/GIS mapping systems, field inventory: guardrails, pipes, culverts, culvert ends, catch basins, impact attenuators
- Photo log, integrated GPS/GIS mapping systems: wetland delineation, vegetation classification
- Airborne LiDAR, aerial photography: landscape, sloped areas, individual counts of trees, side slope, grade, contour
- Tennessee Road Information Management System (TRIMS), Maintenance Management System (MMS): traffic signs, guardrails, and pavement markings, which are manually collected
- Photo, laser scanner, and virtual reality system: most types of visible highway assets except for light posts and road detectors
- Web-based asset management system using Google Maps: cross pipes, ditches
- FHWA Baltimore-Washington Parkway: point cloud software, GIS
Computer vision methods for traffic sign detection and classification
In recent years, several vision-based driver assistance systems capable of sign detection and classification (on a limited basis) have become commercially available. Nevertheless, these systems do not benefit from Google Street View images for traffic sign detection and classification. This is because they need to perform in real-time and thus leverage high-frame-rate methods such as optical flow and edge detection, which are not applicable to the relatively coarse temporal resolution of Google Street View images (Salmen et al. 2012). Several recent studies have shown that Histograms of Oriented Gradients (HOG) and Haar wavelets can be more accurate alternatives for characterizing traffic signs in street-level images (Hoferlin and Zimmermann 2009; Ruta et al. 2007; Wu and Tsai 2006). For example, (Z. Hu and Tsai 2011; Prisacariu et al. 2010) characterize signs by combining edge and Haar-like features, and (Houben et al. 2013; Mathias et al. 2013; Overett et al. 2014) leverage HOG features. More recent studies such as (Balali and Golparvar-Fard 2015a; I. M. Creusen et al. 2010) augment HOG features with color histograms to leverage both texture/pattern and color information for sign characterization. The selection of a machine learning method for sign classification is constrained by the choice of features. Cascaded classifiers are traditionally used with Haar-like features (Balali and Golparvar-Fard 2014; Prisacariu et al. 2010). Support Vector Machines (SVM) (I. M. Creusen et al. 2010; Jahangiri and Rakha 2014; Xie et al. 2009), neural networks, and cascaded classifiers trained with some form of boosting (Balali and Golparvar-Fard 2015a; Overett et al. 2014; Pettersson et al. 2008) have all been used for classification of traffic signs.
(Balali and Golparvar-Fard 2015a) benchmarked and compared the performance of the most relevant methods. Using a large visual dataset of traffic signs and their ground truth, they showed that the joint representation of texture and color in HOG + Color histograms with multiple linear SVM classifiers results in the best performance for classification of multiple categories of traffic signs. As a result, HOG + Color with linear SVM classifiers is used in this paper. We briefly describe this method and our modifications in the Methods section; more detailed information on available techniques can be found in (Balali and Golparvar-Fard 2015a). One missing thread is the scalability of these methods. Different from the state-of-the-art, we make no assumption on the location of traffic signs in the 2D image. Rather, by sliding a window of fixed aspect ratio at multiple scales, candidates for traffic signs are detected in 2D Google Street View images. A key benefit is that the detection and classification results from multiple overlapping images in Google Street View can be used to improve detection accuracy.
Data mining and visualization for roadway inventory management systems
In recent years, many data mining and visualization methods have been developed that analyze and map spatial data at multiple scales for roadway inventory management purposes (Ashouri Rad and Rahmandad 2013). Examples are predicting travel time (Nakata and Takeuchi 2004), managing traffic signals (Zamani et al. 2010), traffic incident detection (Jin et al. 2006), analyzing traffic accident frequency (Beshah and Hill 2010; Chang and Chen 2005), and integrated systems for intelligent analysis of traffic information (Hauser and Scherer 2001; Kianfar and Edara 2013; Y.-J. Wang et al. 2009). (Li and Su 2014) developed a dynamic sign maintenance information system using a Mobile Mapping System (MMS) for data collection. (Mogelmose et al. 2012) discussed the application of traffic sign analysis in intelligent driver assistance systems. (De la Escalera et al. 2003) also detected and classified traffic signs for intelligent vehicles. Using these tools, it is now possible to mine spatial data at multiple layers (e.g., CartoDB) (de la Torre 2013) or spatial and other data together (e.g., GeoTime for analyzing spatio-temporal data) (Kapler and Wright 2005). (I. Creusen and Hazelhoff 2012) visualized detected traffic signs on a 3D map based on the GPS positions of the images. (Zhang and Pazner 2004) presented an icon-based visualization technique designed for co-visualizing multiple layers of geospatial information. A common problem in visualization is that these methods require adding a large number of markers to a map, which creates usability issues and degrades map performance. It can be hard to make sense of a map that is crammed with markers (Svennerberg 2010).
The utility of a particular inventory technique depends on the type of features to be collected, such as location, sign type, spatial measurements, and visual assessment of material properties. In all these cases the data is still collected and analyzed manually, and thus inventory databases cannot be quickly or frequently updated. The current methods of data collection and analysis are field inventory, photo/video logs, integrated GPS/GIS mapping systems, and aerial/satellite photography. However, applications for detection, classification, and localization of U.S. traffic signs in Google Street View images have not been validated before. Overall, there is a lack of automation in integrating data collection, analysis, and representation. In particular, creating and frequently updating traffic sign databases, the availability of techniques for mining, and spatio-temporal interaction with this data still require further research. In the following, a new system is introduced that has the potential to address these limitations.
Extracting location information using the Google Street View API
Required parameters for the Google Street View Image API
- location: latitude and longitude
- size: output size of the image in pixels (2048 × 2048)
- heading: compass heading of the camera
- fov: horizontal field of view of the image
- pitch: up/down angle of the camera relative to the Street View vehicle
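As a sketch of how such images can be requested, the snippet below builds a request URL from the parameters listed above. The API key and the sampled coordinate are placeholders, and the helper name is ours, not part of the paper's implementation.

```python
# Sketch: build a Google Street View Image API request URL from the
# parameters above. The key and coordinates are placeholders.
from urllib.parse import urlencode

STREET_VIEW_ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"

def street_view_url(lat, lon, heading, size=(2048, 2048), fov=90, pitch=0,
                    key="YOUR_API_KEY"):
    """Build the request URL for one Street View image."""
    params = {
        "location": f"{lat},{lon}",      # latitude and longitude
        "size": f"{size[0]}x{size[1]}",  # output size in pixels
        "heading": heading,              # compass heading of the camera
        "fov": fov,                      # horizontal field of view
        "pitch": pitch,                  # up/down angle relative to the vehicle
        "key": key,
    }
    return f"{STREET_VIEW_ENDPOINT}?{urlencode(params)}"

url = street_view_url(40.1105, -88.2284, heading=180)
```

In practice, sampling coordinates at fixed intervals along a route and sweeping the heading yields the overlapping views the detection stage relies on.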
Detection and classification of traffic signs using Google Street View images
In this paper we use the HOG + Color descriptor with linear SVM classifiers for detection and classification of traffic signs, since (Balali and Golparvar-Fard 2015a) showed this method has the best performance. Different from the state-of-the-art methods (Stallkamp et al. 2011; Tsai et al. 2009), we make no prior assumption on the 2D location of traffic signs in the images. Rather, using a multi-scale sliding window that visits all of the image pixels, candidates are selected in each image and passed to multiple binary discriminative classifiers to detect and classify the traffic signs. The method thus processes each image independently, keeping the number of False Negatives (FN, the number of missed traffic signs) and False Positives (FP, the number of accepted background regions) low. It is assumed that each sign is visible from a minimum of three views. A sign detection is considered successful if the detection boxes (from the sliding windows) in three consecutive images have a minimum overlap of 67 %. This constraint is enforced by warping the images before and after each detection using a homography transformation (Hartley and Zisserman 2003).
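The 67 % overlap constraint across three consecutive views comes down to a few lines of geometry. The sketch below uses intersection-over-union as the overlap measure (the paper does not state the exact measure) and assumes the boxes have already been warped into a common frame by the homography:

```python
def overlap_ratio(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def consistent_detection(boxes, min_overlap=0.67):
    """True if homography-warped boxes from three (or more) consecutive
    images agree pairwise to at least `min_overlap`."""
    return len(boxes) >= 3 and all(
        overlap_ratio(boxes[i], boxes[i + 1]) >= min_overlap
        for i in range(len(boxes) - 1)
    )
```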
Characterizing detections with Histograms of Oriented Gradients (HOG) + Color
To account for the impact of scale (i.e., the distance of the signs to the camera) and effectively classify the testing images with the HOG descriptors, a detection window slides over each image, visiting all pixels at multiple spatial scales.
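A minimal sketch of such a multi-scale sliding window, using nearest-neighbor downsampling for the image pyramid; the window size, step, and scale factor here are illustrative, not the paper's values:

```python
import numpy as np

def sliding_windows(image, window=(64, 64), step=16, scale=1.25):
    """Yield (x, y, scale, patch) over an image pyramid built with a
    fixed-size window; a fixed window on a shrinking image is equivalent
    to a growing window on the original image."""
    level = 0
    img = image
    while img.shape[0] >= window[1] and img.shape[1] >= window[0]:
        for y in range(0, img.shape[0] - window[1] + 1, step):
            for x in range(0, img.shape[1] - window[0] + 1, step):
                yield x, y, scale ** level, img[y:y + window[1], x:x + window[0]]
        # nearest-neighbor downsample for the next pyramid level
        new_h, new_w = int(img.shape[0] / scale), int(img.shape[1] / scale)
        if new_h < window[1] or new_w < window[0]:
            break
        rows = np.linspace(0, img.shape[0] - 1, new_h).astype(int)
        cols = np.linspace(0, img.shape[1] - 1, new_w).astype(int)
        img = img[rows][:, cols]
        level += 1
```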
A histogram of local color distribution is also formed, similar to the HOG descriptors, to characterize local color in each traffic sign. For each template window that contains a candidate traffic sign, the image patch is divided into dx × dy non-overlapping pixel regions. A procedure similar to HOG is followed to characterize color in each cell, resulting in a histogram representation of local color distributions. To minimize the effect of varying brightness in images, the hue and saturation color channels are chosen and the value channel is ignored. Normalized hue and saturation values are histogrammed and vector-quantized. These color histograms are then concatenated with HOG to form the HOG + Color descriptors.
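The descriptor above can be sketched in a few lines of numpy. This simplified version assumes the detection window has already been converted upstream to grayscale plus hue/saturation channels (e.g., by an RGB-to-HSV conversion); it applies a [−1, 0, +1] gradient filter, 8 unsigned orientations over 0–180°, hue/saturation histograms, and per-cell L2 normalization, but omits the overlapping block normalization of the full HOG formulation:

```python
import numpy as np

def hog_color_descriptor(gray, hue, sat, cell=8, n_orient=8, n_color=6):
    """HOG + Color descriptor for one detection window.

    gray: 2-D intensity array; hue in [0, 1); sat in [0, 1]. All three
    share the same shape, with sides divisible by `cell`.
    """
    gx = np.zeros(gray.shape, dtype=float)
    gy = np.zeros(gray.shape, dtype=float)
    gx[:, 1:-1] = gray[:, 2:].astype(float) - gray[:, :-2]  # [-1, 0, +1]
    gy[1:-1, :] = gray[2:, :].astype(float) - gray[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned, 0-180 degrees

    h, w = gray.shape
    feats = []
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            sl = np.s_[y:y + cell, x:x + cell]
            # magnitude-weighted orientation histogram for this cell
            o_hist, _ = np.histogram(ang[sl], bins=n_orient, range=(0, 180),
                                     weights=mag[sl])
            # hue and saturation histograms; the value channel is ignored
            h_hist, _ = np.histogram(hue[sl], bins=n_color, range=(0, 1))
            s_hist, _ = np.histogram(sat[sl], bins=n_color, range=(0, 1))
            block = np.concatenate([o_hist, h_hist, s_hist])
            feats.append(block / (np.linalg.norm(block) + 1e-9))  # L2 norm
    return np.concatenate(feats)
```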
Discriminative classification of the HOG + C descriptors per detection
To identify whether or not a detection window at a given scale contains a traffic sign, multiple SVM classifiers are used, each classifying the detection in a one-vs.-all scheme. Thus, each binary SVM decides whether the HOG + C descriptor belongs to a given category of traffic signs. The scores of the multiple classifiers are compared to identify which category of traffic signs best represents the detection (or to reject the observation as background). As with any supervised learning model, each SVM is first trained and then cross-validated; the trained models are used to classify new data (Burges 1998).
To effectively classify the testing candidates with the HOG descriptors, detection windows with a fixed aspect ratio slide over each video frame at multiple spatial scales. Comparison across scales is accomplished by rescaling each sliding-window candidate and transforming it to the spatial scale of each template traffic sign model. For detecting and classifying multiple categories of traffic signs, multiple independent one-against-all classifiers are leveraged, each trained to detect one traffic sign category of interest. Once these models are learned during training, candidate windows are fed to these classifiers, and the label of the classifier with the maximum classification score is returned.
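The one-vs.-all decision reduces to a score comparison over the trained linear models. In the sketch below, the weight matrix, biases, and threshold are placeholders standing in for models trained as described (e.g., with any linear SVM package):

```python
import numpy as np

CATEGORIES = ["regulatory", "warning", "stop", "yield"]

def classify_window(descriptor, weights, biases, threshold=0.0):
    """One-vs.-all decision: score the HOG + C descriptor against each
    trained linear SVM (w . x + b) and return the best-scoring category,
    or None when every classifier rejects the window as background."""
    scores = weights @ descriptor + biases  # one score per category
    best = int(np.argmax(scores))
    if scores[best] > threshold:
        return CATEGORIES[best], float(scores[best])
    return None, float(scores[best])
```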
Mining and spatial visualization of traffic sign data
The process of extracting traffic sign data, including how True Positive (TP), False Positive (FP), and False Negative (FN) detections are handled, is key to the quality of the developed inventory management system. In particular, with respect to missing attributes (FPs and FNs), it is necessary to decide whether to exclude all missing attributes from the analysis. Because each sign is visible in multiple images, a traffic sign missed in some images (an FN) is expected to be successfully detected in the following images, so the FN rate remains very low. In the developed visualization, the most probable location of each detection is visualized on Google Maps using a heat map. Hence, locations falsely detected as signs (FPs), whose likelihood of being falsely detected in multiple images is small, can easily be identified and filtered out. In other words, if the missing signs follow a specific pattern, the missing values can be predicted. The adopted strategy for dealing with FNs and FPs significantly lowers these rates (the experimental results validate this). In the following, the mechanisms provided to the users for data interaction are presented:
Structuring and mining comprehensive databases of detected traffic signs
Spatial visualization of traffic signs data
Each sign may appear and be detected in multiple images. To derive the most probable location for a sign, the area of its bounding box in each of these images is calculated. The image with the largest overall back-projection area is chosen as the most probable location of the traffic sign. This is intuitive: as the Google vehicle gets closer to the sign, the area of the bounding box containing the sign increases.
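This largest-back-projection-area rule amounts to a simple argmax over the images that see the sign; the dictionary layout below is illustrative, not the paper's data schema:

```python
def most_probable_location(detections):
    """Pick the GPS coordinate of the image in which a sign's bounding
    box covers the largest area, i.e. the closest view of the sign.

    detections: one dict per image seeing the sign, e.g.
      {"lat": ..., "lon": ..., "box": (x1, y1, x2, y2)}
    """
    def box_area(d):
        x1, y1, x2, y2 = d["box"]
        return (x2 - x1) * (y2 - y1)

    best = max(detections, key=box_area)
    return best["lat"], best["lon"]
```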
Multiple signs can be detected within a single image, and thus a single latitude and longitude can be assigned to multiple signs. In these situations, as in scenario 1, the sizes of the bounding boxes in the images that see these signs are used to identify the most probable location of each traffic sign. To show that multiple signs are visible in one image, multiple markers are placed on the Google map.
To visualize these scenarios, the developed interface contains a static and a dynamic map. In the static map, all detections are marked; thus, multiple markers are placed when several signs are in proximity of one another. Detailed information about the latitude/longitude, roadway number, city, state, zip-code, country, traffic sign type, and likelihood of each detected traffic sign is also shown by clicking on these markers.
Parameters of HOG + Color detectors
- Gradient filter: [−1; 0; +1]
- Color channels: hue and saturation
- Orientations: 8 orientations in 0–180°
- Number of bins: 6 for each color channel
- Block normalization: L2
- Cell size: 8 × 8 pixels
- Classifier: linear SVM with C = 1
Miss rate and accuracy per image of different types of traffic signs (total of 216 signs)
Detection and classification of all types of traffic signs. In this paper, the traffic signs were classified based on the signs’ message. There are more than 670 types of traffic signs specified in the MUTCD (Manual on Uniform Traffic Control Devices); developing and validating a system that can detect every type of traffic sign associated with an MUTCD code is left as future work.
Testing the proposed system on local streets and non-interstate highways. Since there are no Stop signs and very few Yield signs on interstate highways, the validation of our proposed system for urban areas is left as future work.
By leveraging Google Street View images, this paper presented a new system for creating comprehensive inventories of traffic signs. By processing images extracted from the Google Street View API, using a computer vision method based on joint Histograms of Oriented Gradients and Color, traffic signs were detected and classified into four categories of regulatory, warning, stop, and yield signs. Considering the discriminative classification scores from all images that see a sign, the most probable location of each traffic sign was derived and shown on Google Maps using a heat map. A data card containing information about the location and type of each detected traffic sign was also created. Finally, several data mining interfaces were introduced that allow for better management of the traffic sign inventories. Given the reliable performance shown through the experiments, and because collecting information from Google Street View imagery is cost-effective, the proposed method has the potential to deliver inventory information on traffic signs in a timely fashion and tie into existing DOT inventory management systems. With the continuous growth and expansion of roadway networks, the use of the proposed method will allow DOT practitioners to accommodate the demands of installing new traffic signs and other assets, maintain existing signs, and perform future replacements in compliance with the Manual on Uniform Traffic Control Devices (MUTCD). The report cards, which contain latitude/longitude, roadway number, type of traffic sign, and detection/classification score, facilitate the review of specific sign information in a given location without searching through large databases. Such spatio-temporal representations provide DOTs with information on how different types of traffic signs degrade over time and further provide the condition information necessary for predicting sign replacement plans.
The method can also automate the data collection process for ESRI ArcView GIS databases.
The authors would like to thank the Illinois Department of Transportation for providing the I-57 datasets of years 2013 and 2014. The work of the undergraduate students of the RAAMAC Lab in developing the ground truth data is also appreciated.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- (TRIP), & N. T. R. G. (2014). Michigan transportation by the numbers: meeting the state’s need for safe and efficient mobility.
- Ai, C., & Tsai, Y. J. (2011). Hybrid active contour–incorporated sign detection algorithm. Journal of Computing in Civil Engineering, 26(1), 28–36.
- Ai, C., & Tsai, Y. (2014). Geometry preserving active polygon-incorporated sign detection algorithm. Journal of Computing in Civil Engineering. http://ascelibrary.org/doi/10.1061/%28ASCE%29CP.1943-5487.0000422
- Ashouri Rad, A., & Rahmandad, H. (2013). Reconstructing online behaviors by effort minimization. In A. Greenberg, W. Kennedy, & N. Bos (Eds.), Social computing, behavioral-cultural modeling and prediction (Vol. 7812, pp. 75–82). Heidelberg: Springer Berlin. Lecture Notes in Computer Science.
- Balali, V., & Golparvar-Fard, M. (2014). Video-based detection and classification of US traffic signs and mile markers using color candidate extraction and feature-based recognition. In Computing in civil and building engineering (pp. 858–866).
- Balali, V., & Golparvar-Fard, M. (2015a). Evaluation of multi-class traffic sign detection and classification methods for U.S. roadway asset inventory management. ASCE Journal of Computing in Civil Engineering, 04015022. http://dx.doi.org/10.1061/(ASCE)CP.1943-5487.0000491
- Balali, V., & Golparvar-Fard, M. (2015b). Recognition and 3D localization of traffic signs via image-based point cloud models. Austin: Paper presented at the International Workshop on Computing in Civil Engineering.
- Balali, V., & Golparvar-Fard, M. (2015c). Segmentation and recognition of roadway assets from car-mounted camera video streams using a scalable non-parametric image parsing method. Automation in Construction, 49, 27–39.
- Balali, V., Golparvar-Fard, M., & de la Garza, J. (2013). Video-based highway asset recognition and 3D localization. In Computing in civil engineering (pp. 379–386).
- Balali, V., Depwe, E., & Golparvar-Fard, M. (2015). Multi-class traffic sign detection and classification using Google Street View images. Washington: Paper presented at the 94th Transportation Research Board Annual Meeting (TRB).
- Beshah, T., & Hill, S. (2010). Mining road traffic accident data to improve safety: role of road-related factors on accident severity in Ethiopia (AAAI Spring Symposium: Artificial Intelligence for Development).
- Burges, C. J. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2), 121–167.
- Caddell, R., Hammond, P., & Reinmuth, S. (2009). Roadside features inventory program (Washington State Department of Transportation).
- Chang, L.-Y., & Chen, W.-C. (2005). Data mining of tree-based models to analyze freeway accident frequency. Journal of Safety Research, 36(4), 365–375.
- Creusen, I., & Hazelhoff, L. (2012). A semi-automatic traffic sign detection, classification, and positioning system. In IS&T/SPIE Electronic Imaging 2012 (p. 83050Y). International Society for Optics and Photonics. doi:10.1117/12.908552
- Creusen, I. M., Wijnhoven, R. G., Herbschleb, E., & De With, P. (2010). Color exploitation in HOG-based traffic sign detection. In 17th IEEE International Conference on Image Processing (ICIP) (pp. 2669–2672). IEEE. doi:10.1109/ICIP.2010.5651637
- Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005) (Vol. 1, pp. 886–893). IEEE. doi:10.1109/CVPR.2005.177
- De la Escalera, A., Armingol, J. M., & Mata, M. (2003). Traffic sign recognition and analysis for intelligent vehicles. Image and Vision Computing, 21(3), 247–258.
- de la Garza, J., Roca, I., & Sparrow, J. (2010). Visualization of failed highway assets through geocoded pictures in Google Earth and Google Maps. In Proceedings, CIB W078 27th International Conference on Applications of IT in the AEC Industry.
- de la Torre, J. (2013). Organising geo-temporal data with CartoDB, an open source database on the cloud. In Biodiversity Informatics Horizons 2013.
- DeGray, J., & Hancock, K. L. (2002). Ground-based image and data acquisition systems for roadway inventories in New England: A synthesis of highway practice. New England Transportation Consortium, No. NETCR 30.
- Golparvar-Fard, M., Balali, V., & de la Garza, J. M. (2012). Segmentation and recognition of highway assets using image-based 3D point clouds and semantic Texton forests. Journal of Computing in Civil Engineering, 29(1), 04014023.
- Gonzalez, H., Halevy, A. Y., Jensen, C. S., Langen, A., Madhavan, J., Shapley, R., et al. (2010). Google Fusion Tables: web-centered data management and collaboration. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data (pp. 1061–1066). New York: ACM.
- Haas, K., & Hensing, D. (2005). Why your agency should consider asset management systems for roadway safety.
- Hartley, R., & Zisserman, A. (2003). Multiple view geometry in computer vision. Cambridge: Cambridge University Press.
- Hassanain, M. A., Froese, T. M., & Vanier, D. J. (2003). Framework model for asset maintenance management. Journal of Performance of Constructed Facilities, 17(1), 51–64. doi:10.1061/(ASCE)0887-3828(2003)17:1(51)
- Hauser, T. A., & Scherer, W. T. (2001). Data mining tools for real-time traffic signal decision support & maintenance. In IEEE International Conference on Systems, Man, and Cybernetics (Vol. 3, pp. 1471–1477). doi:10.1109/ICSMC.2001.973490
- Hoferlin, B., & Zimmermann, K. (2009). Towards reliable traffic sign recognition. In IEEE Intelligent Vehicles Symposium 2009 (pp. 324–329). IEEE. doi:10.1109/IVS.2009.5164298
- Houben, S., Stallkamp, J., Salmen, J., Schlipsing, M., & Igel, C. (2013). Detection of traffic signs in real-world images: the German traffic sign detection benchmark. In The 2013 International Joint Conference on Neural Networks (IJCNN) (pp. 1–8). IEEE. doi:10.1109/IJCNN.2013.6706807
- Hu, Z., & Tsai, Y. (2011). Generalized image recognition algorithm for sign inventory. Journal of Computing in Civil Engineering, 25(2), 149–158.
- Hu, X., Tao, C. V., & Hu, Y. (2004). Automatic road extraction from dense urban area by integrated processing of high resolution imagery and LiDAR data. Istanbul: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 35, B3.
- Huang, Y. S., Le, Y. S., & Cheng, F. H. (2012). A method of detecting and recognizing speed-limit signs. In Eighth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP) (pp. 371–374). IEEE. doi:10.1109/IIH-MSP.2012.96
- Jahangiri, A., & Rakha, H. (2014). Developing a Support Vector Machine (SVM) classifier for transportation mode identification by using mobile phone sensor data (p. 14-1442). Washington: Transportation Research Board 93rd Annual Meeting.
- Jalayer, M., Gong, J., Zhou, H., & Grinter, M. (2013). Evaluation of remote-sensing technologies for collecting roadside feature data to support highway safety manual implementation (p. 13-4709). Washington: Transportation Research Board 92nd Annual Meeting.
- Jeyapalan, K. (2004). Mobile digital cameras for as-built surveys of roadside features. Photogrammetric Engineering & Remote Sensing, 70(3), 301–312.
- Jeyapalan, K., & Jaselskis, E. (2002). Technology transfer of as-built and preliminary surveys using GPS, soft photogrammetry, and video logging.
- Jin, Y, Dai, J, & Lu, CT (2006) Spatial-temporal data mining in traffic incident detection. In Proc. SIAM DM 2006 Workshop on Spatial Data Mining (Vol. 5): Citeseer.Google Scholar
- Jones, F. E. (2004). GPS-based Sign Inventory and Inspection Program. International Municipal Signal Association (IMSA) Journal, 42, 30–35.Google Scholar
- Kapler, T., & Wright, W. (2005). GeoTime information visualization. Information Visualization, 4(2), 136–146.View ArticleGoogle Scholar
- Khattak, A. J., Hummer, J. E., & Karimi, H. A. (2000). New and existing roadway inventory data acquisition methods. Journal of Transportation and Statistics, 3, 3.Google Scholar
- Kianfar, J., & Edara, P. (2013). A data mining approach to creating fundamental traffic flow diagram. Procedia - Social and Behavioral Sciences, 104(0), 430–439. http://dx.doi.org/10.1016/j.sbspro.2013.11.136.View ArticleGoogle Scholar
- Li, D., & Su, W. Y. (2014). Dynamic maintenance data mining of traffic sign based on mobile mapping system. Applied Mechanics and Materials, 455, 438–441.View ArticleGoogle Scholar
- Maerz, N. H., & McKenna, S. (1999). Mobile highway inventory and measurement system. Transportation Research Record: Journal of the Transportation Research Board, 1690(1), 135–142.View ArticleGoogle Scholar
- Mathias, M, Timofte, R, Benenson, R, & Van Gool, L (2013) Traffic sign recognition—How far are we from the solution? In Neural Networks (IJCNN), The 2013 International Joint Conference on, (pp. 1–8): IEEE. doi:10.1109/IJCNN.2013.6707049.
- Moeur, R. C. (2014). Manual of traffic signs. http://www.trafficsign.us/signcost.html. Accessed 12/19 2014.Google Scholar
- Mogelmose, A., Trivedi, M. M., & Moeslund, T. B. (2012). Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey. Intelligent Transportation Systems, IEEE Transactions on, 13(4), 1484–1497.View ArticleGoogle Scholar
- Nakata, T, & Takeuchi, JI (2004) Mining traffic data from probe-car system for travel time prediction. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, (pp. 817–822): ACM. doi:10.1145/1014052.1016920.
- Overett, G, Tychsen-Smith, L, Petersson, L, Pettersson, N, & Andersson, L (2014). Creating robust high-throughput traffic sign detectors using centre-surround HOG statistics. Machine Vision and Applications, 1–14. doi:10.1007/s00138-011-0393-1.
- Pettersson, N, Petersson, L, & Andersson, L (2008) The histogram feature-a resource-efficient weak classifier. In Intelligent Vehicles Symposium, 2008 IEEE, (pp. 678–683): IEEE. doi:10.1109/IVS.2008.4621174.
- Prisacariu, VA, Timofte, R, Zimmermann, K, Reid, I, & Van Gool, L (2010) Integrating object detection with 3d tracking towards a better driver assistance system. In Pattern Recognition (ICPR), 2010 20th International Conference on, (pp. 3344–3347): IEEE. doi:10.1109/ICPR.2010.816.
- Rasdorf, W., Hummer, J. E., Harris, E. A., & Sitzabee, W. E. (2009). IT issues for the management of high-quantity, low-cost assets. Journal of Computing in Civil Engineering, 23(2), 91–99. doi:10.1061/(ASCE)0887-3801(2009)23:2(91).View ArticleGoogle Scholar
- Ravani, B., Dart, M., Hiremagalur, J., Lasky, T. A., & Tabib, S. (2009). Inventory and assessing conditions of roadside features statewide. California State Department of Transportation: Advanced Highway Maintenance and Construction Technology Research Center.Google Scholar
- Robyak, R., & Orvets, G. (2004). Video based Asset Data Collection at NJDOT. New Jersey: Department of Transportation.Google Scholar
- Ruta, A, Li, Y, & Liu, X (2007) Towards real-time traffic sign recognition by class-specific discriminative features. In BMVC, (pp. 1–10). doi:10.5244/C.21.24.
- Salmen, J, Houben, S, & Schlipsing, M (2012) Google Street View images support the development of vision-based driver assistance systems. In Intelligent Vehicles Symposium (IV), 2012 IEEE, (pp. 891–895). doi:10.1109/IVS.2012.6232195.
- Stallkamp, J, Schlipsing, M, Salmen, J, & Igel, C (2011) The German traffic sign recognition benchmark: a multi-class classification competition. In Neural Networks (IJCNN), The 2011 International Joint Conference on, (pp. 1453–1460): IEEE. doi:10.1109/IJCNN.2011.6033395.
- Svennerberg, G (2010). Dealing with massive numbers of markers. In M Wade, C Andres, S Anglin, M Beckner, E Buckingham, G Cornell, et al. (Eds.), Beginning Google Maps API 3 (pp. 177–210): Apress. doi:10.1007/978-1-4302-2803-5.
- Tsai, Y., Kim, P., & Wang, Z. (2009). Generalized traffic sign detection model for developing a sign inventory. Journal of Computing in Civil Engineering, 23(5), 266–276.View ArticleGoogle Scholar
- Veneziano, D, Hallmark, SL, Souleyrette, RR, & Mantravadi, K (2002) Evaluating Remotely Sensed Images for Use in Inventorying Roadway Features. In Applications of Advanced Technologies in Transportation (2002), (pp. 378–385): ASCE. doi:10.1061/40632(245)48.
- Wang, YJ, Yu, ZC, He, SB, Cheng, JL, & Zhang, ZJ (2009) A data-mining-based study on road traffic information analysis and decision support. In Web Mining and Web-based Application, 2009. WMWA ‘09. Second Pacific-Asia Conference on, (pp. 24–27). doi:10.1109/WMWA.2009.58.
- Wang, K. C., Hou, Z., & Gong, W. (2010). Automated road sign inventory system based on stereo vision and tracking. Computer‐Aided Civil and Infrastructure Engineering, 25(6), 468–477.View ArticleGoogle Scholar
- Wu, J., & Tsai, Y. (2006). Enhanced roadway geometry data collection using an effective video log image-processing algorithm. Transportation Research Record: Journal of the Transportation Research Board, 1972(1), 133–140.View ArticleGoogle Scholar
- Xie, Y, Liu, LF, Li, CH, & Qu, YY (2009) Unifying visual saliency with HOG feature learning for traffic sign detection. In Intelligent Vehicles Symposium, 2009 IEEE, (pp. 24–29): IEEE. doi:10.1109/IVS.2009.5164247.
- Zamani, Z, Pourmand, M, & Saraee, MH (2010) Application of data mining in traffic management: case of city of Isfahan. In Electronic Computer Technology (ICECT), 2010 International Conference on, (pp. 102–106): IEEE. doi:10.1109/ICECTECH.2010.5479977.
- Zhang, X., & Pazner, M. (2004). The icon imagemap technique for multivariate geospatial data visualization: approach and software system. Cartography and Geographic Information Science, 31(1), 29–41.View ArticleGoogle Scholar
- Zhou, H., Jalayer, M., Gong, J., Hu, S., & Grinter, M. (2013). Investigation of methods and approaches for collecting and recording highway inventory data.Google Scholar