Design of a Lane Detection System Based on ACP Parallel Vision Theory

ml8z_IV_Technol · Source: unknown · Author: 胡薇 · 2018-05-14 10:09

In the past, lane detection performance was mostly verified by manual visual inspection. However, this approach cannot objectively quantify the performance of a lane detection system. Moreover, lane detection systems are complex: different hardware and algorithms, different data acquisition methods, and different acquisition environments (weather, road) all affect test results. As a result, there is still no unified method for evaluating lane detection. This article presents a lane detection system design based on the ACP parallel vision theory, which effectively addresses the problems of lane detection evaluation and testing and enables accurate and stable lane detection.

1 Introduction

研究發(fā)現(xiàn),交通事故大多由駕駛員人為因素造成,例如駕駛員注意力不集中,錯誤判斷與執(zhí)行[1]。車道線檢測技術(shù)是高級駕駛員輔助系統(tǒng)(ADAS)中至關(guān)重要的功能,并促成了自動車道偏移預(yù)警與車道保持等系統(tǒng)[2]-[3]。車道線的檢測精度與穩(wěn)定性是車道線檢測技術(shù)的兩個(gè)重要性能指標(biāo)。車道線檢測系統(tǒng)應(yīng)該具有評估檢測結(jié)果并識別不合理檢測的能力[4]-[5]。對于傳統(tǒng)車來說,當(dāng)發(fā)現(xiàn)不合理的車道線檢測結(jié)果應(yīng)及時(shí)示意駕駛員注意當(dāng)前路況。對于具有ADAS或自動駕駛功能的汽車,汽車要負(fù)責(zé)對檢測結(jié)果進(jìn)行評估,應(yīng)保證在沒有駕駛員參與的情況下做出安全的行駛策略。

How to improve the reliability of lane detection in complex and changing driving environments is a major current challenge. Data fusion and functional fusion are effective ways to build an accurate and stable lane detection system. Functional fusion combines multiple detection functions, for example integrating drivable-area detection and vehicle detection. Data fusion uses devices such as lidar and GPS to compensate for the limitations of the camera, thereby improving the accuracy and stability of lane detection [4]. After reviewing a range of traditional evaluation methods for lane detection systems, this article addresses the shortcomings of existing methods in performance and evaluation and proposes a parallel lane detection method based on the ACP parallel vision theory. By using an artificial parallel system to provide massive amounts of data, the parallel approach effectively compensates for the insufficient training and evaluation of traditional lane detection algorithms caused by limited data.

2 Overview of Vision-Based Lane Detection Algorithms

2.1 The Basic Lane Detection Process

基于視覺的車道線檢測技術(shù)主要包含圖像預(yù)處理,車道線檢測與追蹤三個(gè)過程,如圖1。最常見的圖像預(yù)處理技術(shù)有感興趣區(qū)域提取,消失點(diǎn)檢測,圖像灰度化,噪聲處理,逆透視變換,圖像分割和邊緣檢測等。車道線特征主要包括其顏色和邊緣等信息。當(dāng)車道線被識別和建模后,為了提高車道線的實(shí)時(shí)檢測精度和穩(wěn)定性,車道線模型參數(shù)可以利用跟蹤算法進(jìn)行濾波以提高車道線的檢測精度和穩(wěn)定性。

Figure 1. The basic lane detection process

2.2 Traditional Lane Detection Algorithms

基于視覺的車道線檢測技術(shù)可分為兩類:基于特征的檢測方法[9]-[19]與基于模型的檢測方法[20]-[29]?;谔卣鞯能嚨谰€檢測算法利用車道線的顏色、紋理與邊緣等特征進(jìn)行檢測。文獻(xiàn)[10]利用車道線像素點(diǎn)強(qiáng)度與邊緣特征,通過自適應(yīng)閾值的方法檢測車道線。文獻(xiàn)[11]利用車道線的空間特征和霍夫變換進(jìn)行車道線檢測。文獻(xiàn)[12]利用粒子濾波器識別車道線。文中指出,嚴(yán)格的車道線模型在實(shí)際運(yùn)用中難以滿足穩(wěn)定性要求。基于粒子濾波的車道線檢測算法無需對車道線進(jìn)行精確建模,只需要通過車道線特征跟蹤即可獲得良好效果。文獻(xiàn)[13]將RGB彩色圖片變換為YUV格式,利用車道線邊緣與車道線寬度進(jìn)行車道線檢測。文獻(xiàn)[14]通過將彩色圖片變換到HSV格式以增強(qiáng)車道線色彩的對比度,進(jìn)而根據(jù)像素點(diǎn)強(qiáng)度完成車道線檢測。文獻(xiàn)[16]給出一種基于頻域特征的車道線檢測方法。綜上所述,基于特征的車道線檢測算法更加直接,也較為簡便,適合車道線清晰的場景。然而,基于特征的車道線檢測算法難以應(yīng)對車道線復(fù)雜或可視條件不好的場景下。

Model-based algorithms usually assume the lane follows a straight-line, parabolic, or higher-order curve model. They also require assumptions about the road, for example that it is flat and continuous. Reference [21] proposes a B-Snake model that can fit lanes of arbitrary shape. Reference [22] improves this model into a parallel-snake model. Reference [23] splits the lane model into two parts: a straight-line model in the near field and a B-snake curve in the far field. Reference [25] proposes an integrated method based on the Hough transform, RANSAC, and B-splines: the Hough transform provides a coarse detection, which is then refined with RANSAC and a B-spline model. Reference [33] presents a multi-segment lane modeling method with automatic switching based on RANSAC model fitting. Model-based methods are generally more stable and accurate than feature-based ones, and estimating the model parameters with filtering algorithms is also more straightforward; however, they typically require more computation to fit the model parameters.
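To make the RANSAC-plus-model-fitting idea concrete, here is a compact NumPy sketch that robustly fits a parabolic lane model to candidate lane pixels. It only illustrates the general technique; the published methods ([25][33]) fit B-splines or multi-segment models, and the iteration count and inlier tolerance here are assumptions.

```python
# RANSAC-style parabola fitting to candidate lane pixels (illustrative sketch).
# xs, ys: NumPy arrays of pixel coordinates of candidate lane points.
import numpy as np

def ransac_parabola(xs, ys, n_iter=200, inlier_tol=3.0, rng=None):
    rng = rng or np.random.default_rng(0)
    best_coeffs, best_inliers = None, 0
    for _ in range(n_iter):
        idx = rng.choice(len(xs), size=3, replace=False)    # minimal sample for a parabola
        coeffs = np.polyfit(ys[idx], xs[idx], deg=2)         # model: x = a*y^2 + b*y + c
        residuals = np.abs(np.polyval(coeffs, ys) - xs)
        inliers = int((residuals < inlier_tol).sum())
        if inliers > best_inliers:
            best_coeffs, best_inliers = coeffs, inliers
    # Refit on all inliers of the best hypothesis for a tighter final estimate.
    mask = np.abs(np.polyval(best_coeffs, ys) - xs) < inlier_tol
    return np.polyfit(ys[mask], xs[mask], deg=2)
```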

2.3基于機(jī)器學(xué)習(xí)的車道線檢測算法

In recent years, deep learning has been widely applied to image recognition, segmentation, and object detection. Reference [36] reports that deep convolutional networks can raise lane detection accuracy to above 90%. Reference [37] proposes a lane detection method combining a deep convolutional network with a recurrent neural network: the CNN decides whether each image contains lane markings and outputs their position and direction, while the RNN captures the lane structure across video frames and can recover lanes occluded by surrounding vehicles or objects. Reference [38] processes images from two side-view cameras with convolutional networks and trains an end-to-end lane detector on both real and synthetic images. Reference [41] combines front-view and top-view images, processes the two types with separate convolutional networks, and makes the final prediction with a global strategy based on the physical properties of lane markings. Researchers have also developed lane search methods based on evolutionary and heuristic algorithms. Reference [42] proposes a confidence network and multi-agent detection model inspired by driver behavior. Reference [43] proposes an ant-colony optimization approach to lane search. Reference [44] presents a multi-lane detection method based on a random-walk model, connecting candidate lane features with a directed random walk driven by a Markov probability matrix. In summary, lane detection based on machine learning and intelligent algorithms already outperforms traditional methods. Although learning-based methods place higher demands on the computing power of on-board controllers, continuing hardware upgrades mean they are likely to become the dominant approach to lane detection.
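The toy PyTorch network below only illustrates the kind of per-pixel lane segmentation CNN discussed above; the published architectures in [36]-[38] are far larger and differ in structure, and every layer size here is an assumption.

```python
# A toy per-pixel lane segmentation network (illustrative only, not a published architecture).
import torch
import torch.nn as nn

class TinyLaneNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),          # 1-channel lane probability map
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))

# Usage: TinyLaneNet()(torch.rand(1, 3, 128, 256)) -> a (1, 1, 128, 256) lane-probability map.
```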

3 Integration Methods for Lane Detection

3.1 Overview of Integration Methods

車道線檢測系統(tǒng)的穩(wěn)定性和適應(yīng)性是真正制約車道線系統(tǒng)應(yīng)用的核心問題。對于汽車企業(yè)來說,單一的外部環(huán)境傳感器不足以提供安全有效的環(huán)境感知。像Tesla, Mobileye和Delphi公司等都采用傳感器融合的方法(攝像頭,激光雷達(dá),毫米波雷達(dá)等)來提高車輛對周圍環(huán)境感知的能力。本文回顧了傳統(tǒng)車道線集成的方法,并將其分為算法層集成,系統(tǒng)級集成以及傳感器層集成,如圖2。

3.2 Algorithm-Level Integration

傳統(tǒng)算法層集成主要有串行和并行兩種結(jié)構(gòu)。串行集成方法較為常見[20][21][25]。如文獻(xiàn)[25]將霍夫變換,RANSAC算法和模型擬合依次用于車道線檢測,逐步提高檢測精度。另外,許多文獻(xiàn)也使用在車道線檢測模塊之后加入跟蹤算法的串行結(jié)構(gòu)提高車道線檢測精度[21][22][45]-[47]。文獻(xiàn)[50][51]給出了并行車道線檢測的方法。文獻(xiàn)[50]提出將兩個(gè)相對獨(dú)立且方法不同的車道線檢測算法并行運(yùn)行。通過對比兩個(gè)檢測結(jié)果判斷合理的車道線位置。如果兩種不同算法給出了相似的結(jié)果則視為當(dāng)前車道線檢測合理。相較而言,雖然并行集成引入冗余算法提高了檢測精度,但是提高了系統(tǒng)運(yùn)算量,降低了系統(tǒng)的實(shí)時(shí)性。

3.3 System-Level Integration

現(xiàn)實(shí)道路中的障礙物很有可能影響車道線檢測精度。例如,護(hù)欄就因具有較強(qiáng)的類車道線特征極易造成車道線的誤檢測[54]-[56]。因此,將車道線檢測系統(tǒng)與其他障礙物檢測有機(jī)結(jié)合有利于提高車輛的整體環(huán)境感知能力。臨近車輛也會因?yàn)橄嗨频念伾?、遮擋或陰影問題帶來車道線誤檢測。文獻(xiàn)[30][57]-[60]指出,前車檢測有利于區(qū)分車道線與車輛陰影以及降低車輛遮擋影響,可以提高車道線檢測精度。道路標(biāo)志與道路可行區(qū)域檢測也可以提高車道線檢測精度[4][7][66]-[68]。Tesla與Mobileye也都提出道路識別可以增強(qiáng)車道線檢測的穩(wěn)定性[69][70]。通常道路檢測先于車道線檢測,準(zhǔn)確的道路檢測可以優(yōu)化感興趣區(qū)域的選擇,提高車道線檢測效率。另外,因?yàn)榈缆愤吔缗c車道線有相同走向,道路檢測可以輔助車道線置信度評估系統(tǒng)完成對所檢測車道線的驗(yàn)證。

3.4 Sensor-Level Integration

Sensor fusion can maximize the accuracy and stability of a lane detection system. Reference [76] uses radar to detect surrounding vehicles and precisely remove vehicle-edge pixels, yielding road images that contain only lane markings. References [77][78] combine GPS with road images, using GPS to obtain road shape, boundaries, and direction to refine the lane detection algorithm. Lidar offers high-precision, long-range environment perception, so combining lidar with a camera compensates for the camera's weaknesses. Lidar can locate lane markings from the different reflectance of the road surface and the lane paint [88]. Reference [89] uses lidar to detect obstacles ahead and obtain an accurate drivable area as the basis for lane detection. Reference [56] presents a method that fuses multiple cameras with lidar to detect lanes on urban roads. Although sensor fusion is more accurate than the previous two approaches, it requires complex calibration between sensors, and the additional hardware raises system cost.
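The sketch below illustrates the general reflectance idea behind lidar-based lane extraction [88]: lane paint returns higher intensity than asphalt. The point-cloud format and both thresholds are assumptions, not values from that paper.

```python
# Lidar reflectance sketch: pick bright, near-ground returns as candidate lane-marking points.
# 'points' is assumed to be an (N, 4) array of x, y, z, normalized intensity.
import numpy as np

def lane_points_from_lidar(points, ground_z=0.2, intensity_thresh=0.7):
    near_ground = np.abs(points[:, 2]) < ground_z        # keep returns close to the road plane
    bright = points[:, 3] > intensity_thresh              # lane paint reflects more strongly
    return points[near_ground & bright, :2]               # x, y of candidate lane markings
```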

Figure 2. Integration methods for lane detection

4 Evaluation Methods for Lane Detection

In the past, detection performance was mostly verified by manual visual inspection, which cannot objectively quantify the performance of a lane detection system. Moreover, because lane detection systems are complex, different hardware and algorithms, data acquisition methods, and acquisition environments (weather, road) all affect test results, and there is still no unified evaluation method. This section discusses the factors that affect lane detection accuracy and summarizes an evaluation framework for lane detection systems, as shown in Figure 3. Evaluation methods are divided into two categories: online evaluation and offline evaluation.

Figure 3. Evaluation framework for lane detection systems

4.1 Factors Affecting Lane Detection Accuracy

車道線檢測系統(tǒng)精度通常受限于多種因素。在高速道路上精確的車道線系統(tǒng)不能保證在市區(qū)環(huán)境也有效,因?yàn)槭袇^(qū)交通狀況和車道線標(biāo)識更加復(fù)雜。因此車道線系統(tǒng)性能需要綜合評價(jià),而不是只考慮某一指標(biāo)。如表1,車道線評價(jià)系統(tǒng)需要考慮盡可能多的影響因素。最理想的檢測方式是采用統(tǒng)一的檢測平臺和量化指標(biāo),然而這在現(xiàn)實(shí)情況下難以實(shí)現(xiàn)。

4.2 Offline Evaluation Methods

Offline evaluation based on image and video data is the most common approach. Well-known open datasets include the KITTI and Caltech datasets [7][25]. Image datasets are easier to publish, but the lane positions must be annotated manually in every image; manual annotation is very time-consuming and does not scale to large datasets. Moreover, single images cannot fully reflect the driving environment or comprehensively assess a lane detection algorithm. Video-based evaluation better reflects the real driving environment and algorithm performance, but it also makes annotation significantly harder. To address this, [95] presents a semi-automatic video annotation method: a few fixed rows are taken from each frame and concatenated in temporal and row order into time-slice images, and labeling lane pixels on these time-slice images recovers the lane positions in the video fairly accurately. Researchers have also proposed evaluation methods based on artificially generated scenes [28][56], in which simulation software automatically produces annotated images resembling real road environments.
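A rough sketch of the time-slice idea described for [95] is given below: a few fixed rows from every frame are stacked into one image per row so that lane pixels can be labeled once per slice rather than frame by frame. The row indices and data layout are assumptions.

```python
# Build time-slice images from a video for semi-automatic lane annotation (illustrative).
# frames: iterable of HxWx3 arrays from one video clip.
import numpy as np

def build_time_slices(frames, rows=(300, 350, 400)):
    slices = {r: [] for r in rows}
    for frame in frames:
        for r in rows:
            slices[r].append(frame[r])                      # one scanline per frame
    # Each result has shape (num_frames, W, 3): time runs top-to-bottom, so lane
    # markings appear as continuous curves that can be labeled in one pass.
    return {r: np.stack(lines, axis=0) for r, lines in slices.items()}
```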

4.3 Online Evaluation Methods

Online evaluation usually fuses other detection systems or sensors to assess the confidence of the lane detection result. Road geometry obtained from road detection helps verify the plausibility of the detected lanes in real time. Reference [96] proposes a real-time lane detection algorithm that computes the confidence of a detection from three indicators: lane slope, road width, and vanishing point. Reference [5] mounts cameras on the side of the vehicle to provide a ground-truth reference position for the detected lanes. Reference [56] builds a probabilistic lane-confidence network from camera and lidar data. References [56][77] use GPS, lidar, and high-definition maps, taking the derived road width and direction as indicators for checking the detected lanes. Reference [97] checks lane continuity using vanishing points, road rotation information, and an inter-frame similarity model.
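The following sketch combines the three indicators listed for [96] into a single confidence score. It is only a hedged illustration of the idea: the weighting, nominal lane width, and tolerances are assumptions rather than values from that paper.

```python
# Online confidence score from lane slope, lane width and vanishing-point position (illustrative).
def lane_confidence(slope, width_m, vp_x, img_w,
                    nominal_width=3.5, width_tol=0.8, vp_tol=0.2):
    width_score = max(0.0, 1.0 - abs(width_m - nominal_width) / width_tol)   # plausible lane width
    vp_score = max(0.0, 1.0 - abs(vp_x / img_w - 0.5) / vp_tol)              # vanishing point near image center
    slope_score = 1.0 if 0.3 < abs(slope) < 3.0 else 0.0                     # implausible slopes get zero
    return (width_score + vp_score + slope_score) / 3.0                      # confidence in [0, 1]
```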

4.4 Evaluation Metrics

傳統(tǒng)車道線評價(jià)指標(biāo)多基于主觀觀測的方法,尚沒有形成一種統(tǒng)一有效的車道線系統(tǒng)測試指標(biāo)。文獻(xiàn)[98]設(shè)計(jì)了一個(gè)完善的智能車評價(jià)系統(tǒng),該方法主要側(cè)重于評價(jià)車輛的整體智能化程度。文獻(xiàn)[20]提出車道線檢測系統(tǒng)應(yīng)該滿足以下五點(diǎn)要求:克服道路陰影,可應(yīng)對無明顯車道標(biāo)識的道路,可識別曲線道路,滿足車道線形狀約束以及穩(wěn)定監(jiān)測。文獻(xiàn)[101]提出,車道線系統(tǒng)性能評價(jià)不能局限于檢測率,應(yīng)該采用檢測值與真實(shí)值誤差的方差及變化率,平均絕對值誤差作為性能評價(jià)指標(biāo)。文獻(xiàn)[102]進(jìn)一步提出五種評價(jià)指標(biāo),分別是:車道線特征測量精度,本車自身定位,車道線位置變化率,計(jì)算效率及精度,累計(jì)時(shí)間誤差。

5 A Lane Detection System Based on ACP Parallel Theory

Because the full variety of real scenes and environments cannot be simulated effectively, lane detection performance is hard to predict in unseen scenarios. Online evaluation and confidence estimation can assess the correctness of the current detection in real time, and the driver can be alerted promptly when an implausible result is found. However, this is not enough to fully solve the design and evaluation problems facing lane detection algorithms.

To address these problems, this article proposes a lane detection system design framework based on the ACP parallel theory. Parallel theory is a product of advanced control theory and computer simulation. Parallel control theory was first proposed by Fei-Yue Wang and has been successfully applied to the control and management of many kinds of complex systems [105]-[107]. The main purpose of building a parallel system is to connect the real world with one or more artificial societies in order to solve difficult modeling and testing problems. Building a parallel system relies on the ACP theory, which consists of three parts: Artificial societies, Computational experiments, and Parallel execution. First, the complex system is modeled as a whole, forming a virtual mapping of the real system in computational space. Large numbers of computational experiments are then run in the artificial society so that the virtual system can face scenarios that rarely or never occur in the real world. Through these experiments, a relatively complete system model and control method are obtained, and the model parameters are fed back to the physical layer. Finally, the complex system is run and tested in parallel in the real world and the artificial society so that it is continually refined, until a system that is otherwise hard to model is well controlled. Based on the ACP parallel theory, [109] presents a method for constructing parallel vision systems: computer simulation software is used to build artificial scenes similar to the real world, and high-performance computing platforms are used to solve computer vision problems.

將車道線檢測系統(tǒng)引入平行視覺框架,本文設(shè)計(jì)了平行車道線檢測系統(tǒng)框架,如圖4。首先利用仿真軟件建立類似真實(shí)世界的虛擬交通環(huán)境。之后通過計(jì)算實(shí)驗(yàn),將大量計(jì)算機(jī)標(biāo)注過的道路圖像和有限的現(xiàn)實(shí)圖像結(jié)合起來,訓(xùn)練和驗(yàn)證高精度的車道線檢測模型。最后,在平行執(zhí)行階段,通過不斷的在虛擬世界和現(xiàn)實(shí)世界中的測試,將結(jié)果反饋給車道線檢測模型,利用在線學(xué)習(xí)和自我優(yōu)化實(shí)現(xiàn)安全穩(wěn)定的車道線檢測系統(tǒng)。將車道線檢測引入ACP平行視覺框架,利用平行系統(tǒng)模擬所產(chǎn)生的各種環(huán)境下的標(biāo)注數(shù)據(jù),將有效解決車道線檢測系統(tǒng)評價(jià)與測試這一困境,徹底實(shí)現(xiàn)車道線系統(tǒng)的完整測試,使之更加安全穩(wěn)定,并更好的應(yīng)對現(xiàn)實(shí)世界中的突發(fā)情況。

Figure 4. The ACP-based parallel lane detection approach

6結(jié)論

This article has reviewed the development of lane detection technology from three perspectives: algorithms, integration, and testing. Overall, lane detection algorithms can be divided into traditional computer vision methods and machine learning methods. As effective means of improving detection accuracy and stability, integration methods for lane detection fall into algorithm-level, system-level, and sensor-level integration. After analyzing the limitations of current performance evaluation and testing, this article has proposed a parallel lane detection system design based on the ACP parallel theory. Parallel lane detection will effectively solve the problems of performance evaluation and testing and enable accurate and stable lane detection.

7 References

[1] Bellis, Elizabeth, and Jim Page. National motor vehicle crash causation survey (NMVCCS) SAS analytical user's manual. No. HS-811 053. 2008.

[2] Gayko, Jens E. "Lane departure and lane keeping." Handbook of Intelligent Vehicles. Springer London, 2012. 689-708.

[3] Visvikis C, Smith T L, Pitcher M, et al. Study on lane departure warning and lane change assistant systems. Transport Research Laboratory Project Rpt PPR, 2008, 374.

[4] Bar Hillel, Aharon, et al. "Recent progress in road and lane detection: a survey." Machine Vision and Applications (2014): 1-19.

[5] McCall, Joel C., and Mohan M. Trivedi. "Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation." IEEE Transactions on Intelligent Transportation Systems 7.1 (2006): 20-37.

[6] Yenikaya, Sibel, Gökhan Yenikaya, and Ekrem Düven. "Keeping the vehicle on the road: A survey on on-road lane detection systems." ACM Computing Surveys (CSUR) 46.1 (2013): 2.

[7] Fritsch, Jannik, Tobias Kuhnl, and Andreas Geiger. "A new performance measure and evaluation benchmark for road detection algorithms." Intelligent Transportation Systems (ITSC), 2013 16th International IEEE Conference on. IEEE, 2013.

[8] Beyeler, Michael, Florian Mirus, and Alexander Verl. "Vision-based robust road lane detection in urban environments." Robotics and Automation (ICRA), 2014 IEEE International Conference on. IEEE, 2014.

[9] Kang, Dong-Joong, and Mun-Ho Jung. "Road lane segmentation using dynamic programming for active safety vehicles." Pattern Recognition Letters 24.16 (2003): 3177-3185.

[10] Suddamalla, Upendra, et al. "A novel algorithm of lane detection addressing varied scenarios of curved and dashed lanemarks." Image Processing Theory, Tools and Applications (IPTA), 2015 International Conference on. IEEE, 2015.

[11] Collado, Juan M., et al. "Adaptive road lanes detection and classification." International Conference on Advanced Concepts for Intelligent Vision Systems. Springer Berlin Heidelberg, 2006.

[12] Sehestedt, Stephan, et al. "Robust lane detection in urban environments." Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on. IEEE, 2007.

[13] Lin, Qing, Young joon Han, and Hernsoo Hahn. "Real-time lane departure detection based on extended edge-linking algorithm." Computer Research and Development, 2010 Second International Conference on. IEEE, 2010.

[14] Cela, Andrés F., et al. "Lanes Detection Based on Unsupervised and Adaptive Classifier." Computational Intelligence, Communication Systems and Networks (CICSyN), 2013 Fifth International Conference on. IEEE, 2013.

[15] Borkar, Amol, et al. "A layered approach to robust lane detection at night." Computational Intelligence in Vehicles and Vehicular Systems, 2009. CIVVS'09. IEEE Workshop on. IEEE, 2009.

[16] Kreucher, Chris, and Sridhar Lakshmanan. "LANA: a lane extraction algorithm that uses frequency domain features." IEEE Transactions on Robotics and Automation 15.2 (1999): 343-350.

[17] Jung, Soonhong, Junsic Youn, and Sanghoon Sull. "Efficient lane detection based on spatiotemporal images." IEEE Transactions on Intelligent Transportation Systems 17.1 (2016): 289-295.

[18] Xiao, Jing, Shutao Li, and Bin Sun. "A Real-Time System for Lane Detection Based on FPGA and DSP." Sensing and Imaging 17.1 (2016): 1-13.

[19] Ozgunalp, Umar, and Naim Dahnoun. "Lane detection based on improved feature map and efficient region of interest extraction." Signal and Information Processing (GlobalSIP), 2015 IEEE Global Conference on. IEEE, 2015.

[20] Wang, Yue, Dinggang Shen, and Eam Khwang Teoh. "Lane detection using spline model." Pattern Recognition Letters 21.8 (2000): 677-689.

[21] Wang, Yue, Eam Khwang Teoh, and Dinggang Shen. "Lane detection and tracking using B-Snake." Image and Vision Computing 22.4 (2004): 269-280.

[22] Li, Xiangyang, et al. "Lane detection and tracking using a parallel-snake approach." Journal of Intelligent & Robotic Systems 77.3-4 (2015): 597.

[23] Lim, King Hann, Kah Phooi Seng, and Li-Minn Ang. "River flow lane detection and Kalman filtering-based B-spline lane tracking." International Journal of Vehicular Technology 2012 (2012).

[24] Jung, Cláudio Rosito, and Christian Roberto Kelber. "An improved linear-parabolic model for lane following and curve detection." Computer Graphics and Image Processing, 2005. SIBGRAPI 2005. 18th Brazilian Symposium on. IEEE, 2005.

[25] Aly, Mohamed. "Real time detection of lane markers in urban streets." Intelligent Vehicles Symposium, 2008 IEEE. IEEE, 2008.

[26] Borkar, Amol, Monson Hayes, and Mark T. Smith. "Robust lane detection and tracking with ransac and kalman filter." Image Processing (ICIP), 2009 16th IEEE International Conference on. IEEE, 2009.

[27] Lopez, A., et al. "Detection of Lane Markings based on Ridgeness and RANSAC." Intelligent Transportation Systems, 2005. Proceedings. 2005 IEEE. IEEE, 2005.

[28] López, A., et al. "Robust lane markings detection and road geometry computation." International Journal of Automotive Technology 11.3 (2010): 395-407.

[29] Chen, Qiang, and Hong Wang. "A real-time lane detection algorithm based on a hyperbola-pair model." Intelligent Vehicles Symposium, 2006 IEEE. IEEE, 2006.

[30] Tan, Huachun, et al. "Improved river flow and random sample consensus for curve lane detection." Advances in Mechanical Engineering 7.7 (2015): 1687814015593866.

[31] Hur, Junhwa, Seung-Nam Kang, and Seung-Woo Seo. "Multi-lane detection in urban driving environments using conditional random fields." Intelligent Vehicles Symposium (IV), 2013 IEEE. IEEE, 2013.

[32] Bounini, Farid, et al. "Autonomous Vehicle and Real Time Road Lanes Detection and Tracking." Vehicle Power and Propulsion Conference (VPPC), 2015 IEEE. IEEE, 2015.

[33] Wu, Dazhou, Rui Zhao, and Zhihua Wei. "A multi-segment lane-switch algorithm for efficient real-time lane detection." Information and Automation (ICIA), 2014 IEEE International Conference on. IEEE, 2014.

[34] Zhou, Shengyan, et al. "A novel lane detection based on geometrical model and gabor filter." Intelligent Vehicles Symposium (IV), 2010 IEEE. IEEE, 2010.

[35] Niu, Jianwei, et al. "Robust Lane Detection using Two-stage Feature Extraction with Curve Fitting." Pattern Recognition 59 (2016): 225-233.

[36] He, Bei, et al. "Lane marking detection based on Convolution Neural Network from point clouds." Intelligent Transportation Systems (ITSC), 2016 IEEE 19th International Conference on. IEEE, 2016.

[37] Li, Jun, Xue Mei, and Danil Prokhorov. "Deep neural network for structural prediction and lane detection in traffic scene." IEEE Transactions on Neural Networks and Learning Systems (2016).

[38] Gurghian, Alexandru, et al. "DeepLanes: End-To-End Lane Position Estimation Using Deep Neural Networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2016.

[39] Li, Xue, et al. "Lane detection based on spiking neural network and hough transform." Image and Signal Processing (CISP), 2015 8th International Congress on. IEEE, 2015.

[40] Kim, Jihun, et al. "Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection." Neural Networks (2016).

[41] He, Bei, et al. "Accurate and robust lane detection based on Dual-View Convolutional Neural Network." Intelligent Vehicles Symposium (IV), 2016 IEEE. IEEE, 2016.

[42] Revilloud, Marc, Dominique Gruyer, and Mohamed-Cherif Rahal. "A new multi-agent approach for lane detection and tracking." Robotics and Automation (ICRA), 2016 IEEE International Conference on. IEEE, 2016.

[43] Bertozzi, Massimo, et al. "An evolutionary approach to lane markings detection in road environments." Atti del 6 (2002): 627-636.

[44] Tsai, Luo-Wei, et al. "Lane detection using directional random walks." Intelligent Vehicles Symposium, 2008 IEEE. IEEE, 2008.

[45] Bai, Li, and Yan Wang. "Road tracking using particle filters with partition sampling and auxiliary variables." Computer Vision and Image Understanding 115.10 (2011): 1463-1471.

[46] Danescu, Radu, and Sergiu Nedevschi. "Probabilistic lane tracking in difficult road scenarios using stereovision." IEEE Transactions on Intelligent Transportation Systems 10.2 (2009): 272-282.

[47] Kim, ZuWhan. "Robust lane detection and tracking in challenging scenarios." IEEE Transactions on Intelligent Transportation Systems 9.1 (2008): 16-26.

[48] Shin, Bok-Suk, Junli Tao, and Reinhard Klette. "A super particle filter for lane detection." Pattern Recognition 48.11 (2015): 3333-3345.

[49] Das, Apurba, Siva Srinivasa Murthy, and Upendra Suddamalla. "Enhanced Algorithm of Automated Ground Truth Generation and Validation for Lane Detection System by M2BMT." IEEE Transactions on Intelligent Transportation Systems (2016).

[50] Labayrade, Raphael, S. S. Leng, and Didier Aubert. "A reliable road lane detector approach combining two vision-based algorithms." Intelligent Transportation Systems, 2004. Proceedings. The 7th International IEEE Conference on. IEEE, 2004.

[51] Labayrade, Raphaël, et al. "A reliable and robust lane detection system based on the parallel use of three algorithms for driving safety assistance." IEICE Transactions on Information and Systems 89.7 (2006): 2092-2100.

[52] Hernández, Danilo Cáceres, Dongwook Seo, and Kang-Hyun Jo. "Robust lane marking detection based on multi-feature fusion." Human System Interactions (HSI), 2016 9th International Conference on. IEEE, 2016.

[53] Yim, Young Uk, and Se-Young Oh. "Three-feature based automatic lane detection algorithm (TFALDA) for autonomous driving." IEEE Transactions on Intelligent Transportation Systems 4.4 (2003): 219-225.

[54] Felisa, Mirko, and Paolo Zani. "Robust monocular lane detection in urban environments." Intelligent Vehicles Symposium (IV), 2010 IEEE. IEEE, 2010.

[55] Bertozzi, Massimo, and Alberto Broggi. "GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection." IEEE Transactions on Image Processing 7.1 (1998): 62-81.

[56] Huang, Albert S., et al. "Finding multiple lanes in urban road networks with vision and lidar." Autonomous Robots 26.2 (2009): 103-122.

[57] Cheng, Hsu-Yung, et al. "Lane detection with moving vehicles in the traffic scenes." IEEE Transactions on Intelligent Transportation Systems 7.4 (2006): 571-582.

[58] Sivaraman, Sayanan, and Mohan Manubhai Trivedi. "Integrated lane and vehicle detection, localization, and tracking: A synergistic approach." IEEE Transactions on Intelligent Transportation Systems 14.2 (2013): 906-917.

[59] Wu, Chi-Feng, Cheng-Jian Lin, and Chi-Yung Lee. "Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42.4 (2012): 577-589.

[60] Huang, Shih-Shinh, et al. "On-board vision system for lane recognition and front-vehicle detection to enhance driver's awareness." Robotics and Automation, 2004. Proceedings. ICRA'04. 2004 IEEE International Conference on. Vol. 3. IEEE, 2004.

[61] Satzoda, Ravi Kumar, and Mohan M. Trivedi. "Efficient lane and vehicle detection with integrated synergies (ELVIS)." Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on. IEEE, 2014.

[62] Kim, Huieun, et al. "Integration of vehicle and lane detection for forward collision warning system." Consumer Electronics-Berlin (ICCE-Berlin), 2016 IEEE 6th International Conference on. IEEE, 2016.

[63] Qin, B., et al. "A general framework for road marking detection and analysis." Intelligent Transportation Systems (ITSC), 2013 16th International IEEE Conference on. IEEE, 2013.

[64] Kheyrollahi, Alireza, and Toby P. Breckon. "Automatic real-time road marking recognition using a feature driven approach." Machine Vision and Applications 23.1 (2012): 123-133.

[65] Greenhalgh, Jack, and Majid Mirmehdi. "Detection and Recognition of Painted Road Surface Markings." ICPRAM (1). 2015.

[66] Oliveira, Gabriel L., Wolfram Burgard, and Thomas Brox. "Efficient deep models for monocular road segmentation." Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016.

[67] Kong, Hui, Jean-Yves Audibert, and Jean Ponce. "Vanishing point detection for road detection." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.

[68] Levi, Dan, et al. "StixelNet: A Deep Convolutional Network for Obstacle Detection and Road Segmentation." BMVC. 2015.

[69] Stein, Gideon P., Yoram Gdalyahu, and Amnon Shashua. "Stereo-assist: Top-down stereo for driver assistance systems." Intelligent Vehicles Symposium (IV), 2010 IEEE. IEEE, 2010.

[70] Raphael, Eric, et al. "Development of a camera-based forward collision alert system." SAE International Journal of Passenger Cars - Mechanical Systems 4.2011-01-0579 (2011): 467-478.

[71] Ma, Bing, S. Lakshmanan, and Alfred Hero. "Road and lane edge detection with multisensor fusion methods." Image Processing, 1999. ICIP 99. Proceedings. 1999 International Conference on. Vol. 2. IEEE, 1999.

[72] Beyeler, Michael, Florian Mirus, and Alexander Verl. "Vision-based robust road lane detection in urban environments." Robotics and Automation (ICRA), 2014 IEEE International Conference on. IEEE, 2014.

[73] Ozgunalp, Umar, et al. "Multiple Lane Detection Algorithm Based on Novel Dense Vanishing Point Estimation." IEEE Transactions on Intelligent Transportation Systems 18.3 (2017): 621-632.

[74] Lipski, Christian, et al. "A fast and robust approach to lane marking detection and lane tracking." Image Analysis and Interpretation, 2008. SSIAI 2008. IEEE Southwest Symposium on. IEEE, 2008.

[75] Kim, Dongwook, et al. "Lane-level localization using an AVM camera for an automated driving vehicle in urban environments." IEEE/ASME Transactions on Mechatronics 22.1 (2017): 280-290.

[76] Jung, H. G., et al. "Sensor fusion-based lane detection for LKS+ACC system." International Journal of Automotive Technology 10.2 (2009): 219-228.

[77] Cui, Dixiao, Jianru Xue, and Nanning Zheng. "Real-Time Global Localization of Robotic Cars in Lane Level via Lane Marking Detection and Shape Registration." IEEE Transactions on Intelligent Transportation Systems 17.4 (2016): 1039-1050.

[78] Jiang, Yan, Feng Gao, and Guoyan Xu. "Computer vision-based multiple-lane detection on straight road and in a curve." Image Analysis and Signal Processing (IASP), 2010 International Conference on. IEEE, 2010.

[79] Rose, Christopher, et al. "An integrated vehicle navigation system utilizing lane-detection and lateral position estimation systems in difficult environments for GPS." IEEE Transactions on Intelligent Transportation Systems 15.6 (2014): 2615-2629.

[80] Li, Qingquan, et al. "A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios." IEEE Transactions on Vehicular Technology 63.2 (2014): 540-555.

[81] Kammel, Soren, and Benjamin Pitzer. "Lidar-based lane marker detection and mapping." Intelligent Vehicles Symposium, 2008 IEEE. IEEE, 2008.

[82] Manz, Michael, et al. "Detection and tracking of road networks in rural terrain by fusing vision and LIDAR." Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on. IEEE, 2011.

[83] Schreiber, Markus, Carsten Knöppel, and Uwe Franke. "Laneloc: Lane marking based localization using highly accurate maps." Intelligent Vehicles Symposium (IV), 2013 IEEE. IEEE, 2013.

[84] Clanton J M, Bevly D M, Hodel A S. A low-cost solution for an integrated multisensor lane departure warning system[J]. IEEE Transactions on Intelligent Transportation Systems, 2009, 10(1): 47-59.

[85] Montemerlo, Michael, et al. "Junior: The Stanford entry in the urban challenge." Journal of Field Robotics 25.9 (2008): 569-597.

[86] Buehler, Martin, Karl Iagnemma, and Sanjiv Singh, eds. The DARPA Urban Challenge: Autonomous Vehicles in City Traffic. Vol. 56. Springer, 2009.

[87] Lindner, Philipp, et al. "Multi-channel lidar processing for lane detection and estimation." Intelligent Transportation Systems, 2009. ITSC'09. 12th International IEEE Conference on. IEEE, 2009.

[88] Shin, Seunghak, Inwook Shim, and In So Kweon. "Combinatorial approach for lane detection using image and LIDAR reflectance." Ubiquitous Robots and Ambient Intelligence (URAI), 2015 12th International Conference on. IEEE, 2015.

[89] Amaradi, Phanindra, et al. "Lane following and obstacle detection techniques in autonomous driving vehicles." Electro Information Technology (EIT), 2016 IEEE International Conference on. IEEE, 2016.

[90] Dietmayer, Klaus, et al. "Roadway detection and lane detection using multilayer laser scanner." Advanced Microsystems for Automotive Applications 2005. Springer Berlin Heidelberg, 2005. 197-213.

[91] Hernandez, Danilo Caceres, Van-Dung Hoang, and Kang-Hyun Jo. "Lane surface identification based on reflectance using laser range finder." System Integration (SII), 2014 IEEE/SICE International Symposium on. IEEE, 2014.

[92] Sparbert, Jan, Klaus Dietmayer, and Daniel Streller. "Lane detection and street type classification using laser range images." Intelligent Transportation Systems, 2001. Proceedings. 2001 IEEE. IEEE, 2001.

[93] Broggi, Alberto, et al. "A laser scanner-vision fusion system implemented on the terramax autonomous vehicle." Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on. IEEE, 2006.

[94] Zhao, Huijing, et al. "A laser-scanner-based approach toward driving safety and traffic data collection." IEEE Transactions on Intelligent Transportation Systems 10.3 (2009): 534-546.

[95] Borkar, Amol, Monson Hayes, and Mark T. Smith. "A novel lane detection system with efficient ground truth generation." IEEE Transactions on Intelligent Transportation Systems 13.1 (2012): 365-374.

[96] Lin, Chun-Wei, Han-Ying Wang, and Din-Chang Tseng. "A robust lane detection and verification method for intelligent vehicles." Intelligent Information Technology Application, 2009. IITA 2009. Third International Symposium on. Vol. 1. IEEE, 2009.

[97] Yoo, Ju Han, et al. "A Robust Lane Detection Method Based on Vanishing Point Estimation Using the Relevance of Line Segments." IEEE Transactions on Intelligent Transportation Systems (2017).

[98] Li, Li, et al. "Intelligence Testing for Autonomous Vehicles: A New Approach." IEEE Transactions on Intelligent Vehicles 1.2 (2016): 158-166.

[99] Kluge, Karl C. "Performance evaluation of vision-based lane sensing: Some preliminary tools, metrics, and results." Intelligent Transportation System, 1997. ITSC'97. IEEE Conference on. IEEE, 1997.

[100] Veit, Thomas, et al. "Evaluation of road marking feature extraction." Intelligent Transportation Systems, 2008. ITSC 2008. 11th International IEEE Conference on. IEEE, 2008.

[101] McCall, Joel C., and Mohan M. Trivedi. "Performance evaluation of a vision based lane tracker designed for driver assistance systems." Intelligent Vehicles Symposium, 2005. Proceedings. IEEE. IEEE, 2005.

[102] Satzoda, Ravi Kumar, and Mohan M. Trivedi. "On performance evaluation metrics for lane estimation." Pattern Recognition (ICPR), 2014 22nd International Conference on. IEEE, 2014.

[103] Jung, Claudio Rosito, and Christian Roberto Kelber. "A robust linear-parabolic model for lane following." Computer Graphics and Image Processing, 2004. Proceedings. 17th Brazilian Symposium on. IEEE, 2004.

[104] Haloi, Mrinal, and Dinesh Babu Jayagopi. "A robust lane detection and departure warning system." Intelligent Vehicles Symposium (IV), 2015 IEEE. IEEE, 2015.

[105] F. Y. Wang, "Parallel system methods for management and control of complex systems," Control and Decision, vol. 19, no. 5, pp. 485-489, 514, May 2004.

[106] F. Y. Wang, "Parallel control and management for intelligent transportation systems: Concepts, architectures, and applications," IEEE Trans. Intell. Transp. Syst., vol. 11, no. 3, pp. 630-638, Sep. 2010.

[107] F. Y. Wang, "Artificial societies, computational experiments, and parallel systems: A discussion on computational theory of complex social economic systems," Complex Syst. Complexity Sci., vol. 1, no. 4, pp. 25-35, Oct.

[108] L. Li, Y. L. Lin, D. P. Cao, N. N. Zheng, and F. Y. Wang, "Parallel learning - a new framework for machine learning," Acta Automat. Sin., vol. 43, no. 1, pp. 1-18, Jan. 2017.

[109] K. F. Wang, C. Gou, N. N. Zheng, J. M. Rehg, and F. Y. Wang, "Parallel vision for perception and understanding of complex scenes: methods, framework, and perspectives," Artif. Intell. Rev., vol. 48, no. 3, pp. 298-328, Oct. 2017.

[110] Wang, F. Y., Zheng, N. N., Cao, D., et al. "Parallel driving in CPSS: a unified approach for transport automation and vehicle intelligence." IEEE/CAA Journal of Automatica Sinica, 2017, 4(4), pp. 577-587.

[111] Lv, C., Liu, Y., Hu, X., Guo, H., Cao, D., and Wang, F. Y. "Simultaneous observation of hybrid states for cyber-physical systems: A case study of electric vehicle powertrain." IEEE Transactions on Cybernetics, 2017.

[112] Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." Nature 529.7587 (2016): 484-489.


原文標(biāo)題:一種基于ACP理論的平行車道線檢測方法,能有效解決目前車道線檢測的困境

Source: WeChat public account 智車科技 (ID: IV_Technology). Please credit the source when reprinting.
