File name:
利用深度学习检测复杂货物X射线成像中的隐蔽车辆.pdf (Detecting concealed vehicles in complex cargo X-ray imaging using deep learning)
Development tools:
File size: 9 MB
Downloads: 0
Upload date: 2019-07-14
Description: Non-intrusive inspection systems based on X-ray radiography techniques are routinely used at transport hubs to ensure the conformity of cargo content with the supplied shipping manifest. As trade volumes increase and regulations become more stringent, manual inspection by trained operators is less and less viable due to low throughput. Machine vision techniques can assist operators in their task by automating parts of the inspection workflow. Since cars are routinely involved in trafficking, export fraud, and tax evasion schemes, they represent an attractive target for automated detection and flagging for subsequent inspection by operators. In this contribution, we describe a method for the detection of cars in X-ray cargo images based on trained-from-scratch Convolutional Neural Networks. By introducing an oversampling scheme that suitably addresses the low number of car images available for training, we achieved a 100% car image classification rate for a false positive rate of 1-in-454. Cars that were partially or completely obscured by other goods, a modus operandi frequently adopted by criminals, were correctly detected. We believe that this level of performance suggests that the method is suitable for deployment in the field. It is expected that the generic object detection workflow described can be extended to other object classes given the availability of suitable training data.
Figure 1: Illustration of the X-ray image formation and acquisition processes. Photons emitted by an X-ray source interact with a container and its content, leading to a signal attenuation measured by detectors placed behind the container. By moving the container or the detector, attenuations are determined spatially and are mapped to pixel values to produce an X-ray transmission image.
is in part made possible by the relatively constrained process of baggage scanning: scene dimensions and complexity are both bounded by the small dimensions of a bag. Multi-view (potentially volumetric), multi-energy, and high resolution imaging enable discriminating between threats and legitimate objects, with the latter being mostly identical across different baggage.
In contrast, the detection of threats and anomalies in X-ray cargo imagery is significantly more challenging. Scenes tend to be very large and complex with little constraint on the arrangement and packing of goods. Scanning is usually limited to a single view and the spatial resolution is much lower than in baggage, making it especially difficult to resolve and locate small anomalous objects. Moreover, a very high fraction of items packed in baggage are well-catalogued (e.g. clothing), whereas potentially anything can be transported in a container, making it impractical to learn the appearance of frequent legitimate objects to facilitate the detection of threats. For these reasons, the performance reported for cargo imagery is usually low.
Zhang et al. [15] built a so-called "joint shape and texture model" of X-ray cargo images based on BoW features extracted in superpixel regions. Using this model, images were classified into 22 categories depending on their content (e.g. car parts, paper, plywood). The results highlighted the challenges associated with X-ray cargo image classification, with only 51% of images being assigned to the correct category. In another effort to develop an automated method for the verification of cargo content in X-ray images, Tuszynski et al. [5] developed models based on the log-intensity histograms of images categorized into 92 high-level HS-codes (Harmonized Commodity Description and Coding System). A city block distance was used to determine how much a new image deviates from training examples for the declared HS-code. Using this approach, 31% of images were associated with the correct category, while in 65% of cases the correct category was amongst the five closest matching models.
With around 20% of cargo containers being shipped empty, it would be of interest to automatically classify images as empty or non-empty in order to facilitate further processing (e.g. avoid processing empty images with object-specific detectors) and to prevent fraud. Rogers et al. [23] described a scheme where small non-overlapping windows were classified by a Random Forest (RF) based on multi-scale oriented Basic Image Features (oBIFs) and intensity moments. In addition, window coordinates were used as features so that the classifier would implicitly learn location-specific appearances. The authors reported that 99.3% of non-empty stream-of-commerce (SoC) containers were detected as such for a 0.7% false alarm rate, and that 90% of synthetic images (where a single object equivalent to 1 L of water was placed) were correctly classified as non-empty for 0.51% false alarms. The same problem was tackled by Andrews and colleagues [24] using an anomaly detection approach: instead of implementing the empty container verification as a binary classification problem, a "normal" class is defined (either empty or non-empty containers) and new images are scored based on their distance from this "normal" class. Features of markedly down-sampled images (32 × 9 pixels) were extracted from the hidden layers of an auto-encoder and classified by a one-class SVM, achieving 99.2% accuracy when empty containers were chosen as the "normal" class and non-empty instances were considered as anomalies.
Representation-learning is an alternative to classification based on designed features, whereby the image features that optimise classification are learned during training. CNNs, often referred to as deep learning, are representation-learning methods [25] that were recently shown to significantly outperform other machine vision techniques in many applications, including large-scale natural image classification [26]. While most examples of applications to X-ray imagery to date have been limited to medical data [27], Akcay et al. [28] recently demonstrated the use of CNNs for baggage X-ray image classification. As there was insufficient training data to train a network from scratch, the authors fine-tuned a variant of the AlexNet architecture [29] that was pre-trained on ImageNet, a dataset of natural images. This approach significantly outperformed prior work in the field, indicating that features learned from natural images do indeed transfer, at least to a certain degree, to X-ray images.
To our knowledge, CNNs have not been applied to X-ray cargo imagery. In this contribution, we compare CNNs with other types of features and determine whether trained-from-scratch models (i.e. trained only on X-ray images) perform better than pre-trained networks.
4 Method
4.1 Dataset
X-ray transmission images of SoC cargo containers (typically 20 or 40 foot long) and tankers transported on railway carriages were acquired using a Rapiscan Eagle R60 rail scanner equipped with a 6 MV linac source. Image dimensions vary between 1290 × 850 and 2570 × 850 pixels depending on the type of cargo and container size, with a pixel size of ≈ 6 mm/pixel in the horizontal direction. The raw images are greyscale with 16-bit precision.
For the purpose of this work, images containing at least one car (car images) are taken as the positive class and images not containing any car (non-car images) as the negative class. The dataset contains 79 car images for a total of 192 individual cars. Car images can be broadly divided into 5 categories: (i) a single car on its own in a small container (20 ft long), (ii) two cars in a large container (40 ft long), (iii) multiple cars stacked in a container, including one at an angle, (iv) a single car next to unrelated goods (no overlap), (v) one or two cars placed in front of or behind other goods (partial or complete occlusion). The specific car models and manufacturers were unknown; however, based on visual appearance, sedans, SUVs, compacts, and sports cars were present in the dataset.
Non-car images were randomly sampled from SoC images acquired over the course of several months. These images can be of cargo containers and tankers, with the first type being the most frequent. The nature of the cargo loads varies greatly from one container to another and includes pallets of commercial goods, industrial equipment, household items, and bulk materials. Approximately 20% of the containers imaged were empty. Non-car images also include other types of vehicles such as vans, motorbikes, and industrial vehicles (e.g. tractors, bulldozers).
4.2 Image pre-processing
Prior to classification, X-ray transmission images were pre-processed as previously described by Rogers et al. [30, 23]. Black stripes resulting from source misfires or faulty detectors were first removed. Variations in the source intensity and sensor responses were corrected by column-wise pixel intensity normalisation based on air attenuation values, which are considered invariant. Erroneous isolated pixels (e.g. excessively bright or dark) were replaced by the median of their neighbourhood. For certain experiments, the log transform of images was also computed, as it is frequently used to facilitate the detection of concealed items by operators and was also previously employed for the automated classification of cargo images by Tuszynski and colleagues [5].
4.3 Classification scheme
The detection of cars in X-ray images was implemented as a binary classification task (Fig. 2). A window-based approach was taken, enabling i) the processing of optimally small sub-images for high classification performance as well as low computational time and memory consumption, and ii) approximate localisation of car-containing regions. Each window w_i, densely sampled from an image I, was classified and associated with a "car-likeness" score p_w,i. The image score p_I, which is indicative of the confidence that the image contains at least one car, was given by the maximum value of p_w,i across all w_i of I. The image was classified as car if p_I > t_CAR, and non-car otherwise.
Figure 2: A window-based scheme for the classification of large X-ray cargo images. Windows are densely sampled from large input images and their features computed, based on which a "car-likeness" score is assigned by a window classifier. An image score is computed as the maximum window score across all windows of an image. The image class label (car or non-car) is obtained by thresholding of the image score.
t_CAR is a tunable threshold parameter that defines the balance between detection and false alarm rates. Two types of windows were evaluated: square 512 × 512 pixel and rectangular 350 × 1050 pixel. The latter corresponded to the average size of cars in the training set and can be interpreted as a geometric prior. In all cases, windows were sampled with a stride of 32 pixels and 64 pixels for training and inference, respectively.
Heatmaps for classification visualisation were generated by mapping the mean window response at all image locations to pixel values. Such visualisations are essential to clarify the decision of the automated detection scheme and to enable verification by the operator before deciding whether further actions (e.g. physical inspection) are required.
Windows were classified by RF, SVM, or logistic regression (for CNNs only) based on pixel intensities, fixed geometric image descriptors (oBIFs), learned visual words (Pyramid Histograms Of Visual Words, PHOW), and features extracted from CNNs.
4.4 Window classification using Random Forest and Support Vector Machines
For this work, an open-source implementation of Random Forest for MATLAB was employed¹. If not otherwise stated, classification was carried out using 40 trees, randomly sampling the square root of the total number of features at each split during tree building, and using equal weights for the two classes. For each window, the classifier outputs the "car-likeness" score p_w,i, computed as the fraction of trees voting for the car class.
Classification using linear SVMs was implemented using MATLAB's built-in functions. The box constraint (or regularisation) parameter C and the kernel scale γ were tuned empirically. The "car-likeness" score p_w,i was computed using a function that maps uncalibrated SVM scores to posterior probabilities. As proposed by Platt [31], a sigmoid was used as the mapping function and its parameters were estimated post-training using 10-fold cross-validation.
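For a concrete reference point, Platt's sigmoid calibration is available off-the-shelf in scikit-learn. The sketch below is not the MATLAB code used in the paper; it simply illustrates calibrating a linear SVM with 10-fold cross-validation on dummy stand-in data (the feature dimensionality and labels are placeholders).

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

# Dummy stand-ins for window feature vectors and car / non-car labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 184))          # e.g. 184-dim oBIF histograms
y_train = rng.integers(0, 2, size=400)

svm = LinearSVC(C=1.0)                         # box constraint C (tuned empirically)
# Platt scaling: a sigmoid maps uncalibrated SVM scores to posterior
# probabilities; its parameters are estimated by 10-fold cross-validation.
calibrated = CalibratedClassifierCV(svm, method="sigmoid", cv=10)
calibrated.fit(X_train, y_train)
p_w = calibrated.predict_proba(X_train)[:, 1]  # "car-likeness" scores
```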
In addition to RF and SVM, softmax was also used for classification using CNNs, as described in Section 4.6.
4.5 Feature computation
The simplest type of features assessed for car image classification was intensity values (Sec. 4.5.1). More advanced descriptors included oBIFs (fixed geometric features, Sec. 4.5.2) and PHOW (learned visual words, Sec. 4.5.3). CNNs for feature computation and classification are described in Section 4.6.
4.5.1 Intensity features
Intensity features were encoded in multi-scale 256-bin histograms. Input images were blurred by convolution with a Gaussian kernel of standard deviation equal to 1, 2, 4, and 8. The resulting feature vector was 1024-dimensional. Histograms of intensity features were computed efficiently for a large number of windows using the integral histogram method described by Porikli [32].
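As an illustration, the multi-scale intensity histogram of a single window can be written directly; this is a simplified Python sketch, and the integral-histogram optimisation of [32] used in the paper is omitted here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_features(window, sigmas=(1, 2, 4, 8), n_bins=256):
    """Blur the window at several scales and concatenate one 256-bin
    histogram per scale, giving a 4 x 256 = 1024-dimensional vector."""
    feats = []
    for sigma in sigmas:
        blurred = gaussian_filter(window.astype(np.float32), sigma)
        hist, _ = np.histogram(blurred, bins=n_bins, range=(0, 65535),
                               density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Example on a random 16-bit greyscale window.
window = np.random.randint(0, 65536, size=(512, 512), dtype=np.uint16)
print(intensity_features(window).shape)        # (1024,)
```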
¹ https://code.google.com/p/randomforest-matlab/ – last accessed 31.05.2016
4.5.2 Oriented Basic Image Features
BIFs encode textural information by classifying pixels of an image into one of seven categories according to local symmetry [33]. BIFs were computed based on the response to a bank of derivative-of-Gaussian (DtG) filters [33, 34]. The scale-normalised response s_ij to the ij-th DtG filter G_ij of scale σ_B is shown in equation (1):

s_ij = σ_B^(i+j) (G_ij * I)    (1)
Intermediate terms are then calculated pixel-wise: λ (equation 2) is the scale-normalised image Laplacian and γ (equation 3) is a measure of the variance over directions of the second directional derivative:

λ = s_20 + s_02    (2)

γ = sqrt((s_20 − s_02)² + 4 s_11²)    (3)
The BIF value for a pixel is an integer between 1 and 7 given by the index of the largest of the following quantities: {ε s_00, 2 sqrt(s_10² + s_01²), λ, −λ, (γ + λ)/√2, (γ − λ)/√2, γ}, with ε being a threshold parameter that dictates when a pixel is considered flat (i.e. with no strong local structure), which is one type of BIF. The remaining six BIFs are slopes, dark blobs, bright blobs, dark lines, bright lines, and saddle-like (Fig. 3).
Figure 3: Computation of oriented Basic Image Features for window classification. oBIFs for the input window are computed at multiple scales and for different threshold values. Histograms for each combination of parameters are constructed and concatenated to produce the window feature vector. For clarity, orientation quantization is omitted from the schematic.
The BIF formulation can be extended by additionally determining the quantized orientation of rotationally asymmetric features [35]. This extended formulation, termed oriented Basic Image Features (oBIFs), has 23 features in total, with dark lines, light lines, and saddle-like types having 4 unpolarised orientations, while the slope type has 8 polarised directions. Implementations of both BIFs and oBIFs in MATLAB and Mathematica are available online [36].
oBIFs were computed at four scales (σ_B = 0.7, 1.4, 2.8, 5.6) for two threshold parameters (ε = 0.011, 0.19). oBIFs were encoded in histograms of 23 bins per scale and per threshold value, resulting in 184-dimensional feature vectors per window. As for intensity features, oBIFs histogram construction for multiple windows was carried out efficiently using the integral histogram method [32].
4.5.3 Pyramid Histograms Of Visual Words
PHOW are a multi-scale extension of dense SIFT (Scale-Invariant Feature Transform) proposed by Bosch et al. [37, 38]. Whereas sparse SIFT approaches compute scale and rotation-invariant image descriptors based on local gradients at keypoint locations [39], dense SIFT features are computed for each pixel or on a regular grid with constant spacing [40]. The latter approach makes SIFT descriptors suitable for classification tasks where keypoints are not reliably detected or not consistent between the images considered, which is the case for X-ray cargo images.
PHOW computation (Fig. 4) consists of three steps: i) dense SIFT computation, ii) visual word quantization, and iii) spatial visual word histogram computation. SIFT descriptors were extracted at each location of a regular grid with a step of 3 pixels. SIFT descriptors are spatial histograms of image gradients with 8 orientation bins arranged in 4 × 4 spatial bins centred at each grid location, producing a 128-dimensional feature vector per location. This extraction step was carried out at four different scales (4, 6, 8, and 10 pixels) by varying the dimensions of the spatial bins. Images were smoothed prior to computation with Gaussian kernels of standard deviation equal to the scale divided by 6. Descriptors were then quantized into 300 visual words that were learned by k-means clustering of training image descriptors. A two-level pyramid histogram of visual words (2 × 2 and 4 × 4 spatial bins) was constructed across all grid locations and scales, resulting in 6000-dimensional feature vectors for each window.
Figure 4: Computation of PHOW features for window classification. SIFT descriptors are extracted at multiple scales before being quantized into visual words. A two-level pyramid histogram of visual words is then constructed across scales. The feature vector is obtained by concatenation of all individual visual word histograms.
4.6 Convolutional Neural Networks
CNNs were implemented using the MatConvNet library [41]. Two types of network were evaluated, both based on the very deep architectures proposed by Simonyan and Zisserman [42]. The first one is an 11-layer architecture (8 convolutional layers and 3 fully-connected layers), while the second is an 18-layer architecture (16 convolutional layers and 3 fully-connected layers). In both cases, all filters in the convolutional layers had 3 × 3 dimensions. Details of the architectures can be found in the supplementary materials. The networks were regularised by batch normalisation, whereby the mean and variance of layer inputs are fixed [43]. Batch normalisation performed significantly better than the conventional regularisation approach that uses dropout layers [44].
At the start of training, the learning rate was set to 10⁻⁴ and then to 10⁻⁵ when the validation error stopped decreasing. Weight decay was fixed at 5 × 10⁻⁴. The average image computed over the training set was subtracted from all input images. When window classification was carried out solely based on CNNs, the "car-likeness" score p_w,i was given directly by the output of the softmax classifier. In some experiments, features extracted from the first or second fully-connected layers (FC1 and FC2, respectively) were classified using Random Forest or SVM classifiers as outlined in Section 4.4. Only 512 × 512 square windows were considered for classification using features extracted from CNNs. In order to make the memory footprint suitable for GPU processing, input images were first down-sampled to 256 × 256 pixels and converted to 8-bit precision.
In addition to models trained from scratch on windows sampled from X-ray cargo imagery, transfer learning was also evaluated: window features extracted from the FC1 and FC2 layers of the VGG-VD-19 model [42] pre-trained on ImageNet were classified using Random Forest and SVM classifiers. As VGG-VD-19 expects 224 × 224 pixel RGB images as input, the greyscale channel of input X-ray images was replicated twice and downsampled, resulting in 3-channel 224 × 224 pixel images.
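The paper used MatConvNet's VGG-VD-19; the Python sketch below uses torchvision's VGG-19 as an approximate stand-in, simply to illustrate the grayscale replication to 3 channels, the resize to 224 × 224, the extraction of an FC-layer activation, and a downstream Random Forest. It is not the original pipeline, and the ImageNet mean/std normalisation is deliberately omitted.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

vgg = models.vgg19(weights="IMAGENET1K_V1").eval()
# Sub-network ending at the first fully-connected layer (FC1).
fc1_net = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten(),
                              vgg.classifier[0])

def fc1_features(window_16bit):
    """Replicate the greyscale channel to 3 channels, resize to 224 x 224
    and return the 4096-dimensional FC1 activation. (Proper ImageNet
    mean/std normalisation is omitted here for brevity.)"""
    x = torch.from_numpy(window_16bit.astype(np.float32) / 65535.0)
    x = x.expand(3, -1, -1).unsqueeze(0)               # 1 x 3 x H x W
    x = torch.nn.functional.interpolate(x, size=(224, 224), mode="bilinear")
    with torch.no_grad():
        return fc1_net(x).squeeze(0).numpy()

# Hypothetical downstream classifier, as in Section 4.4:
# feats = np.stack([fc1_features(w) for w in train_windows])
# rf = RandomForestClassifier(n_estimators=40).fit(feats, train_labels)
```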
4.7 Car oversampling
While potentially millions of non-car window examples can be sampled from the SoC dataset, there are only a total of 192 individual cars. Training a balanced classifier (i.e. 192 windows for each class) would certainly lead to poor performance and generalisation. A similar outcome would be expected if a classifier was trained on a severely imbalanced dataset containing significantly more non-car examples. Such issues are frequently encountered in machine learning and more recently with CNNs, where performance and generalisation are contingent on the availability of suitably large training datasets. Dataset augmentation by sampling random crops of input images at training was shown to significantly reduce CNN overfitting in large-scale image classification tasks [29]. A similar approach was taken here.

Figure 5: Example of car window over-sampling. Windows in green are over-sampled and red windows indicate the user-annotated region of interest. Panels A and B show square windows with t_ROI = 0.5 and rectangular windows with t_ROI = 0.65, respectively.
Issues related to the scarcity of car window examples were alleviated by over-sampling of car regions at training. In addition to the user-defined ROI, partial car windows whose intersection with said ROI was greater than a t_ROI threshold value were also considered (Fig. 5). This approach had two advantages: i) it enabled training balanced classifiers with large numbers of examples, and ii) it encouraged the classifier to be invariant to the position of the sampled windows in relation to the car ROI. t_ROI was set to 0.5 for square 512 × 512 windows (Fig. 5A) and to 0.65 for 350 × 1050 rectangular windows, increasing the number of car window examples available at training by factors of ≈140 and ≈50, respectively (Fig. 5B).
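A minimal Python sketch of this over-sampling rule is given below; the overlap is normalised here by the ROI area, which is one plausible reading of the t_ROI criterion described above (the authors' exact normalisation may differ).

```python
def oversample_car_windows(roi, image_shape, window=(512, 512),
                           stride=32, t_roi=0.5):
    """Return top-left corners of all windows whose intersection with the
    annotated car ROI exceeds t_roi (intersection area / ROI area)."""
    rx, ry, rw, rh = roi                   # ROI as (x, y, width, height)
    win_h, win_w = window
    img_h, img_w = image_shape
    roi_area = float(rw * rh)
    corners = []
    for y in range(0, img_h - win_h + 1, stride):
        for x in range(0, img_w - win_w + 1, stride):
            ix = max(0, min(x + win_w, rx + rw) - max(x, rx))
            iy = max(0, min(y + win_h, ry + rh) - max(y, ry))
            if ix * iy / roi_area > t_roi:
                corners.append((x, y))
    return corners
```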
4.8 Performance evaluation
Performance was evaluated on the classification of entire images as car or non-car based on aggregated window scores. Two assumptions were made: (i) non-car images (negative class) were generally associated with lower p_I values (image scores) than car images (positive class); and (ii) achieving a high detection rate on car images was trivial, but doing so while minimizing false alarms on non-car images (i.e. high sensitivity, high specificity classification) is challenging. Non-car images were partitioned into disjoint training, validation, and test sets, each comprising 10,000 SoC images.
The performance evaluation scheme was identical across all combinations of features and classifiers. Leave-one-out cross-validation (LOOCV) was used for the determination of p_I for car images due to the low number of examples of the positive class in the dataset. A classifier was trained using windows sampled from 78 car images and the non-car training set before being used to infer p_I for the left-out car image. The p_I for non-car validation images was computed using a classifier trained on all 79 car images and the same non-car training images. All free parameters, including t_CAR, were then tuned before repeating the process, with fixed parameters, using the non-car test images.
Combining the p_I values obtained for the negative class (hold-out on validation or test set) and positive class (LOOCV), performance metrics such as the area under the ROC curve (AUC) and the H-measure could be computed. The latter was introduced by Hand and Anagnostopoulos [45] to suitably accommodate imbalanced datasets, such as the one considered here, while also addressing issues related to the underlying cost function of the AUC metric. Like the AUC, the H-measure can be computed without having to explicitly set a value for the threshold parameter (here t_CAR). A beta distribution with modes (π_2 + 1, π_1 + 1) is used as the distribution of relative misclassification severities, where π_2 and π_1 are the relative frequencies of the positive and negative classes, respectively. Details regarding the H-measure computation are given elsewhere [46] and implementations for most scientific computing packages are freely available². The false positive rate (FPR) was computed by thresholding the test set p_I scores using the highest possible value for t_CAR (tuned individually for each experiment based on validation images) that still resulted in 100% car image classification.
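The operating point used for the FPR can be expressed compactly; the Python sketch below is a simplified reading of this procedure, picking the threshold directly from the car image scores rather than from a separate validation pass.

```python
import numpy as np

def fpr_at_full_detection(p_car, p_noncar_test):
    """Highest threshold t_CAR that still classifies every car image as
    'car' (here with p_I >= t_CAR), and the resulting false positive rate
    on held-out non-car test images."""
    t_car = p_car.min()                        # all cars satisfy p_I >= t_car
    fpr = np.mean(p_noncar_test >= t_car)
    return t_car, fpr
```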
During performance evaluation, dictionary learning for PHOW features and mean image computation for CNNs were carried out solely based on training images (i.e. new dictionaries were learned and new mean images were computed for each iteration of LOOCV).
4.9 Generation of synthetically obscured car examples
Synthetically obscured car images were generated by projecting non-car objects onto SoC car images. Due to the nature of the X-ray transmission image formation process, objects can be inserted into images by multiplication, as previously described by Rogers et al. [23]. The process started with a raw car image. A first object was sampled from a database containing a total of 196 objects and placed at a random location in the container. The dimensions and density of the object were set to half and a third of that of a typical car, respectively. The newly generated synthetic image was then classified and the image score p_I computed. The mean relative attenuation of the car ROI was computed as the difference between the synthetic image and the raw image, divided by the raw image. This process was repeated, adding more and more objects until the car was completely obscured (mean relative attenuation equal to one). Five different realisations of this experiment were combined to generate a plot of the image score versus mean relative attenuation.
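A hedged Python sketch of the multiplicative insertion and the attenuation measure is given below; the object database, placement logic, and density scaling are omitted, and the sign convention is chosen so that the attenuation approaches one as the car becomes fully obscured.

```python
import numpy as np

def insert_object(transmission, obj_transmission, x, y):
    """Project an object into a raw transmission image by multiplication:
    attenuations combine multiplicatively in transmission space (cf. [23]).
    Both inputs are assumed normalised to [0, 1] transmission."""
    out = transmission.astype(np.float64).copy()
    h, w = obj_transmission.shape
    out[y:y + h, x:x + w] *= obj_transmission
    return out

def mean_relative_attenuation(raw_roi, synthetic_roi):
    """Mean relative attenuation of the car ROI: the difference between the
    raw and synthetic images, divided by the raw image, so that a fully
    obscured car gives a value of one."""
    return float(np.mean((raw_roi - synthetic_roi) / raw_roi))
```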
5 Results
For each type of feature considered, the best car image classification results obtained across different combinations of pre-processing, window geometry, and classifiers are presented in Table 1. It was found that an approach combining multi-scale computation (scales = 1, 2, 4, 8) and encoding using 256-bin histograms (though diminishing returns were observed from 32 bins upwards) was optimal for intensity features. Log-transforming windows prior to analysis was found to be detrimental, but using rectangular windows (based on prior knowledge about car geometry) significantly improved performance over square windows (H-measure of 0.95 and 0.86, respectively). However, intensity features performed the worst when compared to other types of features, with a false alarm rate above 5%. While the differences in intensity distribution between car and non-car windows might be a useful cue for classification, more advanced image descriptors such as PHOW and oBIFs were required to achieve satisfactory levels of performance.
PHOW features outperformed intensity features when using raw images as input, and log-transforming windows led to a further two-fold decrease in false alarm rate to approximately 1%. Interestingly, oBIFs outperformed PHOW features even though the former do not rely on ad-hoc dictionary learning or a pyramidal scheme. Instead, oBIFs are fixed geometric descriptors computed independently at multiple scales. oBIFs results showed a ≈3-fold improvement in false alarm rate to 0.35% when compared to PHOW features. Using BIFs instead of oBIFs led to a marked degradation in performance, indicating that orientation quantisation was beneficial for classification. Log-transforming input windows also had a negative impact on classification using oBIFs, which was potentially caused by the lack of apparent texture and structure in these transformed images.
The best performance across all experiments, correct classification of all cars and a false positive rate of 0.22% (p_I = 0.990), was achieved using features extracted from the FC1 layer of a trained-from-scratch CNN when square input windows were log-transformed and classification was carried out using a Random Forest model. The 95% confidence interval for the detection rate, which was estimated by supplementing the results with a single artificial failure case, was [0.96, 1.00]. The 18-layer trained-from-scratch CNN outperformed the shallower 11-layer network in all cases, indicating that the former generalised well to unseen data despite its significantly increased complexity.
² http://www.hmeasure.net/ – last accessed 23.06.2016
Table 1: Performance for the detection of cars in X-ray cargo images. Only the best results for each type of feature are shown. "+ Log" denotes that input images were log-transformed prior to feature computation. R and S denote 1050 × 350 and 512 × 512 pixel windows, respectively.
Features | Windows | Classifier | H-measure | FPR [%]
Intensity (4 scales) | R | RF | 0.900 | 5.20
PHOW (4 scales) + Log | S | RF | 0.977 | 1.05
oBIFs (4 scales, 2ε) | R | RF | 0.992 | 0.35
CNN 11-layer + Log | S | Softmax | 0.990 | 0.47
CNN 18-layer (FC1) + Log | S | RF | 0.995 | 0.22
ImageNet VGG-VD-19 (FC2) + Log | S | SVM | 0.993 | 0.34
The second-best result was obtained using a CNN pre-trained on the ImageNet dataset with no further fine-tuning, which suggests that features learned from natural images constitute a robust baseline for X-ray image classification.
Figure 6: Classification outcome for non-obscured car images during leave-one-out cross-validation (previously unseen by the classifier). For each example, the raw X-ray transmission image (top, with additional red outlines indicating the location of cars) and the output of the classifier formatted as a heatmap (bottom) are shown.
Figure 6 shows representative examples of car image classification by the CNN scheme where individual cars are not obscured by other goods. Various scenarios are shown: single cars without other goods (Fig. 6i), multiple cars without other goods (Fig. 6iv, v, and vi), cars with other goods (Fig. 6ii and iii), cars with other vehicles (Fig. 6v), and cars at an angle (Fig. 6vi). In all cases, cars were also suitably localised by the heatmap generated during classification, regardless of the model (e.g. sedan, coupe, station wagon, SUV) and dimensions. Regions of images that contained other unrelated cargo usually gave very little to no signal (Fig. 6ii), with the exception of cases where said cargo also included semantically-related objects, such as motorbikes (Fig. 6iii) or vans (Fig. 6v).
The CNN scheme also performed well for complex X-ray imagery in which cars were partially and completely obscured by other cargo (Fig. 7).
The vast majority of non-car images (97.82% of the test set) had p_I ≤ 0.5 and are thus correctly classified using a naive t_CAR = 0.5 threshold (Fig. 8). These images typically include empty containers