Robust Analysis of Feature Spaces: Color Image Segmentation

Dorin Comaniciu    Peter Meer
Department of Electrical and Computer Engineering
Rutgers University, Piscataway, NJ 08855, USA
comanici@caip.rutgers.edu    meer@caip.rutgers.edu

Abstract

A general technique for the recovery of significant image features is presented. The technique is based on the mean shift algorithm, a simple nonparametric procedure for estimating density gradients. Drawbacks of the current methods (including robust clustering) are avoided. Feature space of any nature can be processed, and as an example, color image segmentation is discussed. The segmentation is completely autonomous, only its class is chosen by the user. Thus, the same program can produce a high quality edge image, or provide, by extracting all the significant colors, a preprocessor for content-based query systems. A 512 x 512 color image is analyzed in less than 10 seconds on a standard workstation. Gray level images are handled as color images having only the lightness coordinate.

1. Introduction
Feature space analysis is a widely used tool for solving low-level image understanding tasks. Given an image, feature vectors are extracted from local neighborhoods and mapped into the space spanned by their components. Significant features in the image then correspond to high density regions in this space. Feature space analysis is the procedure of recovering the centers of the high density regions, i.e., the representations of the significant image features. Histogram based techniques and the Hough transform are examples of the approach.

To avoid the artifacts of quantization a feature space should have a continuous coordinate system. The content of a continuous feature space can be modeled as a sample from a multivariate, multimodal probability distribution. The highest density regions correspond to clusters centered on the modes of the underlying probability distribution. Traditional clustering techniques [5] can be used, but they are reliable only if the number of clusters is small and known a priori. There is no theoretical evidence that individual clusters obey multivariate normal distributions and an extracted normal cluster necessarily corresponds to a significant image feature. On the contrary, a strong artifact cluster may appear when several features are mapped into partially overlapping regions.

Nonparametric density estimation [4, Chap. 6] avoids the use of the normality assumption, however it requires additional input information (type of the kernel, number of neighbors). This information must be provided by the user, and for multimodal distributions it is difficult to guess the optimal setting. Nevertheless, a reliable general technique for feature space analysis can be developed using a simple nonparametric density estimation algorithm. In this paper we propose such a technique whose robust behavior is superior to methods employing robust estimators from statistics.
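Reference [4] is not reproduced in this excerpt, so the following is only a generic one-dimensional kernel density estimate, included to make concrete what user-supplied "type of the kernel" and bandwidth choices mean; the function name kde, the Epanechnikov kernel, and the bandwidths used in the example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kde(samples, grid, h):
    """Generic kernel density estimate with an Epanechnikov kernel: the user
    still has to choose the kernel type and the bandwidth h, which is exactly
    the input information that is hard to guess for multimodal data."""
    u = (grid[:, None] - samples[None, :]) / h            # (n_grid, n_samples)
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return k.sum(axis=1) / (len(samples) * h)

# Two well-separated modes: a small bandwidth resolves both,
# a large bandwidth merges them into a single broad peak.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(0.0, 0.5, 500), rng.normal(4.0, 0.5, 500)])
grid = np.linspace(-2.0, 6.0, 200)
density_fine = kde(samples, grid, h=0.3)
density_coarse = kde(samples, grid, h=3.0)
```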
2. Requirements for Robustness

Estimation of a cluster center is called in statistics the multivariate location problem. To be robust, an estimator must tolerate a percentage of outliers, i.e., data points not obeying the underlying distribution of the cluster. Numerous robust techniques were proposed [9, Sec. 7.1], and in computer vision the most widely used is the minimum volume ellipsoid (MVE) estimator proposed by Rousseeuw [9].

Based on MVE, a robust clustering technique with applications in computer vision was proposed in [6]. The number of significant clusters was not needed a priori. The robust clustering method was successfully employed for the analysis of a large variety of feature spaces, but was found to become less reliable once the number of modes exceeded ten. This is mainly due to constraining the shape of the removed clusters to be elliptical. Furthermore, the estimated covariance matrices (the shape parameters of the ellipsoids) are not reliable since they are based on only p + 1 points.

To be able to correctly recover a large number of significant features, the problem of feature space analysis must be solved in context. In image understanding tasks the data to be analyzed originates in the image domain. That is, the feature vectors satisfy additional, spatial constraints. While these constraints are indeed used in the current techniques, their role is mostly limited to compensating for feature allocation errors made during the independent analysis of the feature space. To be robust the feature space analysis must fully exploit the image domain information. As a consequence of the increased role of image domain information the burden on the feature space analysis can be reduced.
First all the significant features are extracted, and [...]

[...] 0.1741 is shown with a star at the top of Figure 1. Other, more adaptive strategies for setting the search window size can also be defined.

The mean shift algorithm is the tool needed for feature space analysis. The outline of a general procedure is given below.

Feature Space Analysis
1. Map the image domain into the feature space.
2. Define an adequate number of search windows at random locations in the space.
3. Find the high density region centers by applying the mean shift algorithm to each window.
4. Validate the extracted centers with image domain constraints to provide the feature palette.
5. Allocate, using image domain information, all the feature vectors to the feature palette.

The procedure is very general and applicable to any feature space. In the next section we describe a color image segmentation technique developed based on this outline.
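The mean shift details of Section 3 are not reproduced in this excerpt, so the following sketch only illustrates the idea under simple assumptions: a uniform kernel inside a spherical search window of radius r, the window repeatedly re-centered on the sample mean of the points it contains, and the image domain validation and allocation of steps 4 and 5 approximated by a plain support threshold and a nearest-center assignment. The names mean_shift_mode, feature_space_analysis, n_windows, and min_support are illustrative, not taken from the paper.

```python
import numpy as np

def mean_shift_mode(features, start, r, max_iter=100, tol=1e-3):
    """Move a spherical window of radius r uphill in density: re-center it on
    the mean of the points it covers until the shift becomes negligible."""
    center = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        inside = features[np.linalg.norm(features - center, axis=1) <= r]
        if len(inside) == 0:                      # empty window, stop here
            break
        new_center = inside.mean(axis=0)
        shift = np.linalg.norm(new_center - center)
        center = new_center
        if shift < tol:                           # converged to a mode
            break
    return center

def feature_space_analysis(features, r, n_windows=25, min_support=50, seed=0):
    """Outline of the general procedure: random initial windows, mean shift to
    the modes, pruning of weak or duplicate centers, then allocation."""
    rng = np.random.default_rng(seed)
    starts = features[rng.choice(len(features), size=n_windows, replace=False)]
    centers = [mean_shift_mode(features, s, r) for s in starts]

    palette = []                                  # retained significant centers
    for c in centers:
        if any(np.linalg.norm(c - p) < r for p in palette):
            continue                              # duplicate of an existing mode
        support = np.sum(np.linalg.norm(features - c, axis=1) <= r)
        if support >= min_support:                # crude stand-in for steps 4-5
            palette.append(c)
    palette = np.asarray(palette)                 # assumes at least one center survives

    # Allocate every feature vector to the nearest palette entry.
    dists = np.linalg.norm(features[:, None, :] - palette[None, :, :], axis=2)
    return palette, dists.argmin(axis=1)
```

In the actual technique the validation and allocation steps use the spatial coherence of the image, not just feature space counts as above.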
4. Color Image Segmentation

Image segmentation, partitioning the image into homogeneous regions, is a challenging task. The richness of visual information makes bottom-up, solely image driven approaches always prone to errors. To be reliable, the current systems must be large and incorporate numerous ad-hoc procedures, e.g. [1]. The paradigms of gray level image segmentation (pixel-based, area-based, edge-based) are also used for color images. In addition, the physics-based methods take into account information about the image formation processes as well [7]. The proposed segmentation technique does not consider the physical processes, it uses only the given image, i.e., a set of RGB vectors. Nevertheless, it can be easily extended to incorporate supplementary information about the input.

Since perfect segmentation cannot be achieved without a top-down, knowledge driven component, a bottom-up segmentation technique should
• only provide the input into the next stage where the task is accomplished using a priori knowledge about its goal; and
• eliminate, as much as possible, the dependence on user set parameter values.
Segmentation resolution is the most general parameter characterizing a segmentation technique. While this parameter has a continuous scale, three important classes can be distinguished.

Undersegmentation corresponds to the lowest resolution. Homogeneity is defined with a large tolerance margin and only the most significant colors are retained for the feature palette. The region boundaries in a correctly undersegmented image are the dominant edges in the image.

Oversegmentation corresponds to intermediate resolution. The feature palette is rich enough that the image is broken into many small regions from which any sought information can be assembled under knowledge control. Oversegmentation is the recommended class when the goal of the task is object recognition.

Quantization corresponds to the highest resolution. The feature palette contains all the important colors in the image. This segmentation class became important with the spread of image databases, e.g., [3, 8]. The full palette, possibly together with the underlying spatial structure, is essential for content-based queries.
The proposed color segmentation technique operates in any of these three classes. The user only chooses the desired class; the specific operating conditions are derived automatically by the program.

Images are usually stored and displayed in the RGB space. However, to ensure the isotropy of the feature space, a uniform color space with the perceived color differences measured by Euclidean distances should be used. We have chosen the L*u*v* space [10, Sec. 3.3.9], whose coordinates are related to the RGB values by nonlinear transformations. The daylight standard D65 was used as reference illuminant. The chromatic information is carried by u* and v*, while the lightness coordinate L* can be regarded as the relative brightness. Psychophysical experiments show that L*u*v* space may not be perfectly isotropic [10, p. 311], however, it was found satisfactory for image understanding applications. The image capture/display operations also introduce deviations which are most often neglected.
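The paper only states that the L*u*v* coordinates are obtained from the RGB values by nonlinear transformations with D65 as reference white; the exact conversion chain (in particular whether gamma-corrected or linear RGB was used) is not given in this excerpt. The sketch below uses the standard CIE formulas with the common Rec. 709/sRGB primaries and should be read as one plausible implementation, not the authors' code; the function name rgb_to_luv is illustrative.

```python
import numpy as np

# D65 reference white (XYZ with Y = 100) and its u', v' chromaticity
XN, YN, ZN = 95.047, 100.0, 108.883
UN = 4 * XN / (XN + 15 * YN + 3 * ZN)   # approx. 0.1978
VN = 9 * YN / (XN + 15 * YN + 3 * ZN)   # approx. 0.4683

def rgb_to_luv(rgb):
    """Convert an (..., 3) array of RGB values in [0, 1] (assumed linear,
    Rec. 709/sRGB primaries, D65 white point) to CIE L*u*v*."""
    rgb = np.asarray(rgb, dtype=float)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ m.T * 100.0                     # scale so that white has Y = 100
    x, y, z = xyz[..., 0], xyz[..., 1], xyz[..., 2]

    denom = np.maximum(x + 15 * y + 3 * z, 1e-12)
    u_prime = 4 * x / denom
    v_prime = 9 * y / denom

    yr = y / YN
    L = np.where(yr > 0.008856, 116.0 * np.cbrt(yr) - 16.0, 903.3 * yr)
    u = 13.0 * L * (u_prime - UN)
    v = 13.0 * L * (v_prime - VN)
    return np.stack([L, u, v], axis=-1)
```

In practice an existing implementation such as skimage.color.rgb2luv (which assumes sRGB input and a D65 white point) can serve the same purpose as these hand-written formulas.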
The steps of color image segmentation are presented below. The acronyms ID and FS stand for image domain and feature space respectively. All feature space computations are performed in the L*u*v* space.

1. [FS] Definition of the segmentation parameters. The user only indicates the desired class of segmentation. The class definition is translated into three parameters:
• the radius of the search window, r;
• the smallest number of elements required for a significant color, Nmin;
• the smallest number of contiguous pixels required for a significant image region.

The size of the search window determines the resolution of the segmentation, smaller values corresponding to higher resolutions. Within the same segmentation class an image containing large homogeneous regions should be analyzed at higher resolution than an image with many textured areas. The simplest measure of the "visual activity" can be derived from the global covariance matrix. The square root of its trace, σ, is related to the power of the signal (image). The radius r is taken proportional to σ. The rules defining the three segmentation class parameters are given in Table 1.
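The following sketch shows how the window radius could be derived as described: σ is the square root of the trace of the global covariance matrix of the L*u*v* feature vectors, and r is taken proportional to σ. The per-class proportionality factors come from Table 1, which is not reproduced in this excerpt, so the numbers below are placeholders only; the function name search_window_radius and the seg_class argument are likewise illustrative.

```python
import numpy as np

def search_window_radius(luv_pixels, seg_class="oversegmentation"):
    """Derive the search window radius r from the image's 'visual activity':
    sigma is the square root of the trace of the global covariance matrix of
    the L*u*v* feature vectors, and r is proportional to sigma."""
    feats = np.asarray(luv_pixels, dtype=float).reshape(-1, 3)
    sigma = np.sqrt(np.trace(np.cov(feats, rowvar=False)))
    factor = {"undersegmentation": 0.4,    # placeholder value, not from Table 1
              "oversegmentation": 0.3,     # placeholder value, not from Table 1
              "quantization": 0.2}[seg_class]  # placeholder value, not from Table 1
    return factor * sigma
```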