File name:
Learning From Humans How to Grasp: A Data-Driven Architecture for Autonomous Grasping With Anthropomorphic Soft Hands
Development tool:
File size: 547kb
Downloads: 0
Upload date: 2019-10-20
Detailed description: Soft hands are robotic systems that embed compliant elements into their mechanical design. This allows them to adapt effectively to objects and to the environment, and ultimately to improve their grasping performance. Compared to classic rigid hands, these hands offer clear advantages in human-like manipulation, namely ease of use and robustness. However, their potential for autonomous control remains largely unexplored, due to the lack of suitable control strategies. To address this issue, in this work we propose an approach that enables soft hands to grasp objects autonomously, starting from the observation of human strategies. A classifier realized through a deep neural network takes as input visual information on the object to be grasped, and predicts which action a human would perform to achieve the goal. This information is then used to select one primitive from a set of human-inspired primitives, which define the evolution of the soft hand posture as a combination of anticipatory actions and touch-based reactive grasping. The hardware components of the architecture include an RGB camera to observe the scene, a 7-DoF manipulator, and a soft hand. The soft hand is equipped with IMUs at the fingernails to detect contact with the object. We extensively tested the proposed architecture with 20 objects, achieving a success rate of 81.1% over 111 grasps.
[Fig. 4. Confusion matrix summarizing the performance of the proposed deep classifier on the test set. Each entry shows the rate at which the primitives identified by the row labels are classified as the ones identified by the column labels. Rate is also color coded, from low rate coded with white to high rate coded with dark green.]

Before going through the details of these two components, we briefly describe the phases of primitive extraction and labeling from human videos.

a) Dataset creation and human primitive labeling: We collected 6336 first-person RGB videos (single-object, table-top scenario) from 11 right-handed subjects grasping the 36 objects in Fig. 2. The list of objects was chosen to span a wide range of possible grasps, taking inspiration from [16]. During the experiments, subjects were comfortably seated in front of a table, where the object was placed. They were asked to grasp the object starting from a rest position (hand on the table, palm down). Each task was repeated 4 times from 4 points of view (the four central points of the table edges). To extract and label the strategies, we first visually inspected the videos and identified ten main primitives:

Top: the object is approached from the top with the palm down, parallel to the table. The object center is approximately at the level of the middle phalanx. When contact is established, subjects close all their fingers simultaneously, achieving a firm power-like grasp.

Top left: same as for the top grasp, but with the palm rotated clockwise by at least π/9 radians.

Top right: as for the top grasp, but with the palm rotated counter-clockwise by at least π/9 radians.
Bottom: the object is approached from its right side. The palm is roughly perpendicular to the table, but slightly tilted so that the fingertips are closer to the object than the wrist. When contact is reached, the hand closes with the thumb opposing the four long fingers. This primitive is used to grasp large and concave objects, e.g. a salad bowl.

Pinch: same as for the top grasp, but the primitive concludes with a pinch grasp.

Pinch left: same as for the top left grasp, but the primitive concludes with a pinch grasp.

Pinch right: same as for the top right grasp, but the primitive concludes with a pinch grasp.
Slide: the hand is placed on the object from above, so as to push it toward the surface. Maintaining this hand posture, the object is moved towards the edge of the table until it partially protrudes. A grasp is then achieved by moving the thumb below the object, and opposing it to the long fingers. This strategy is used to grasp objects whose thickness is small compared to the other dimensions, such as a book or a compact disc.

Flip: the thumb is used together with the environment on one side, and the index and/or the middle finger on the opposite one, to pivot the object. The item rotates by about π/2 and is then grasped with a pinch. This strategy is used to grasp small and thin objects, such as a coin.

Lateral: the same as for the top grasp, but the palm is perpendicular to the object during the approaching phase. This strategy is used to grasp tall objects like a bottle.

The choice of these primitives was made taking inspiration from the literature [16], [13], and to provide a representative yet concise description of human behavior, without any claim of exhaustiveness. Note that the selection of the action primitive is not only object-dependent but also configuration-dependent. This is clear for the left/right modifier. Consider for example a bottle; if placed on its base it triggers a lateral grasp, while when lying on its side it induces a top grasp. (Illustrative videos can be found here: goo.gl/nmgxK7)

The first frame of each video showing only the object in the environment was extracted, and elaborated through the object detection part of the network (see next subsection). The cropped image was then labeled with the strategy used by the subject in the remaining part of the video. This is the dataset that we used to train the network.

A. Object detection

Object detection is implemented using the state-of-the-art detector YOLOv2 [17]. Given the RGB input image, YOLOv2 produces as output a set of labeled bounding boxes containing all the objects in the scene. We first discard all the boxes labeled as person. We assume that the target is localized close to the center of the image. Hence, we select the bounding box closest to the scene center. Once the object of interest is identified, the image is automatically cropped around the bounding box, and resized to 416 x 416 pixels (the size expected by the subsequent layer). The result is fed into the following block to be classified.
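As a concrete illustration of this selection step, the sketch below assumes a hypothetical yolo_detect(image) wrapper returning (label, x_min, y_min, x_max, y_max) tuples; only the person filter, the center-distance heuristic, and the 416 x 416 resize come from the text.

```python
import cv2
import numpy as np

def select_and_crop_target(image, detections, out_size=416):
    """Pick the non-person detection closest to the image center, crop, resize.

    detections: list of (label, x_min, y_min, x_max, y_max) tuples, e.g. as
    returned by a hypothetical yolo_detect(image) wrapper around YOLOv2.
    """
    h, w = image.shape[:2]
    center = np.array([w / 2.0, h / 2.0])

    # Discard boxes labeled as person, as described above.
    candidates = [d for d in detections if d[0] != "person"]
    if not candidates:
        return None

    def dist_to_center(det):
        _, x0, y0, x1, y1 = det
        return np.linalg.norm(np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0]) - center)

    # The target is assumed to lie close to the scene center.
    _, x0, y0, x1, y1 = min(candidates, key=dist_to_center)
    crop = image[int(y0):int(y1), int(x0):int(x1)]
    # 416 x 416 is the input size expected by the classification block.
    return cv2.resize(crop, (out_size, out_size))
```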
B. Primitive classification

a) Architecture: Instead of building a completely new architecture from scratch, we follow a transfer learning approach. The idea is to exploit the existing knowledge learned from one environment to solve a new problem, which is different yet related. In this way, a smaller amount of data is sufficient to train the model, and high accuracy can be achieved with a short training time. We select as starting point Inception-V3 [18], trained on the ImageNet data set to classify objects from images. We keep the early and middle layers and remove the softmax layer. In this way, we have direct access to the highly refined and informative set of neural features that Inception-V3 uses to perform its classification. It is important to note that the object signature is not one-to-one, but it aims at extracting high-level semantic descriptions that can be applied to objects with similar characteristics. On top of the original architecture we add two fully connected layers containing 2048 neurons each (with ReLU activation function). These layers operate an adaptive non-linear combination of the high-level features discovered by the convolutional and pooling layers, further refining the information. In this way, the geometric features are implicitly linked to each other to serve as the basis for the classification. The output of the last fully connected layer is then fed into the softmax, which produces a probability distribution over the considered set of motion primitives. We choose the one with maximum probability as the output of the network.

[Fig. 5. Four significant relative object-hand postures assumed by the hand during the approaching phase. Starting from these initial configurations, the hand translates until a contact is detected by the IMUs. Directions of translation are perpendicular to the table for top and pinch primitives, and parallel to it for lateral and bottom.]
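To make the architecture concrete, here is one way the head could be assembled in Keras (the library named in the next subsection). The two 2048-unit ReLU layers and the ten-way softmax follow the text; the pooling step, the dropout placement, the input size, and all variable names are assumptions of this sketch.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model

N_PRIMITIVES = 10  # top, top left/right, bottom, pinch (x3), slide, flip, lateral

# include_top=False drops the original classifier, exposing the high-level
# convolutional features that Inception-V3 learned on ImageNet.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(416, 416, 3))

x = GlobalAveragePooling2D()(base.output)  # collapse spatial dims to one feature vector
x = Dense(2048, activation="relu")(x)      # first fully connected adaptation layer
x = Dropout(0.5)(x)                        # p_drop = 0.5, the selected value
x = Dense(2048, activation="relu")(x)      # second fully connected adaptation layer
outputs = Dense(N_PRIMITIVES, activation="softmax")(x)  # distribution over primitives

model = Model(base.input, outputs)
```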
b) Training and validation: We use the labeled dataset described above to train the network. The parameters of the two fully connected layers at the top of the Inception-V3 architecture are trained from scratch, while the original parameters of the network are fine-tuned. To this end, we impose layer-specific learning rates. More specifically, we freeze the weights of the first 172 layers (over the total 249) of the pre-trained network. These layers indeed capture universal features like curves and edges that are also relevant to our problem. We instead use the subsequent 77 layers to capture dataset-specific features. However, we expect the pre-trained weights to be already good compared to randomly initialized ones. Hence, we avoid abruptly changing them by using a relatively small learning rate λ_ft. Finally, given that the weights of the two last fully connected layers are trained from scratch, we randomly initialize them and use a higher learning rate λ_tr w.r.t. the one used in the previous layers. We further reduce the risk of over-fitting by using dropout: before presenting a training sample to the network, we randomly disconnect neurons from its structure (in practice, this is implemented by masking their activations). Each neuron is removed with probability p_drop. In this way, a new topology is produced each time the network is trained, introducing variability and reducing the production of pathological co-adaptations of weights. We use the Keras library for network design and training. All the procedures were executed on an NVIDIA Tesla M40 GPU with 12GB of on-board memory.
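Keras exposes no built-in per-layer learning rates, so one possible realization of the freeze-and-fine-tune scheme above is a custom training step with two optimizers, sketched below. It reuses the `model` from the previous sketch; the choice of Adam is our assumption, since the text does not name the optimizer.

```python
import tensorflow as tf

LAMBDA_FT, LAMBDA_TR = 1e-5, 1e-3  # selected learning rates reported in the text

for layer in model.layers[:172]:   # freeze the first 172 of the pre-trained layers
    layer.trainable = False

# The two fully connected layers and the softmax added on top are trained from
# scratch with the larger rate; the remaining Inception layers are fine-tuned.
head_vars = [v for l in model.layers[-4:] for v in l.trainable_variables]
head_ids = {id(v) for v in head_vars}
base_vars = [v for v in model.trainable_variables if id(v) not in head_ids]

opt_ft = tf.keras.optimizers.Adam(LAMBDA_FT)  # gentle updates, pre-trained layers
opt_tr = tf.keras.optimizers.Adam(LAMBDA_TR)  # faster updates, new head
loss_fn = tf.keras.losses.CategoricalCrossentropy()

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(images, training=True))
    grads = tape.gradient(loss, base_vars + head_vars)
    opt_ft.apply_gradients(zip(grads[:len(base_vars)], base_vars))
    opt_tr.apply_gradients(zip(grads[len(base_vars):], head_vars))
    return loss
```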
To verify the generalization and robustness of primitive classification, we use hold-out validation. The goal is to estimate the expected level of model predictive accuracy independently from the data used to train the model. We split our data set into 70% of objects for training, 20% of objects for validation, and 10% for testing. We maintained a balanced number of objects per class among the three data sets. We trained 30 different network configurations, using the cross-entropy cost function to adjust the weights by calculating the error between the output of the softmax layer and the label vector of the given sample category. Each configuration was obtained by varying the most relevant model learning hyper-parameters, i.e. the learning rates λ_ft ∈ {10^-3, 10^-4, 10^-5, 10^-6} and λ_tr ∈ {10^-2, 10^-3, 10^-4}, the dropout probability p_drop ∈ {0.4, 0.5, 0.6}, the number of epochs in {10, 20, 30, 40}, and the batch size in {10, 20, 30, 40}. The training time for each network ranged from 1 to 5 hours. We selected the configuration that provided the highest f1-score accuracy [19] on the validation data set. The selected hyper-parameters are λ_ft = 10^-5, λ_tr = 10^-3, p_drop = 0.5, 30 epochs, and batch size 20.

With such parameters, the network is able to classify the primitives in the test set with an accuracy ranging from 86% to 100%, depending on the primitive, and 95% on average. Fig. 4 shows the normalized accuracy of the classifier for all ten classes. Visually inspecting the results reveals two main causes behind the occasional failures of the network. The first one is a limitation in the problem formulation itself, which makes it intrinsically impossible to achieve 100% classification accuracy. Indeed, it seldom occurs that the same object in the same configuration is grasped in two different ways by two subjects. This happens for example for the coin, which is sometimes grasped through a flip, while in other cases a slide is used instead. The second cause is connected to the fact that, using only a single RGB image, the network sometimes misinterprets the object size. This could, for example, lead to predicting a top grasp rather than a bottom grasp for a bowl, since this object may be interpreted as a ball-like item. In future work we will consider the use of a stereo camera to prevent this issue.
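A minimal sketch of the object-level split and hyper-parameter search described above follows; drawing the 30 configurations at random from the full grid is our reading of the text, and the per-class balancing step is omitted for brevity.

```python
import itertools
import random

def split_objects(object_ids, seed=0):
    """Split object ids 70/20/10 into train/validation/test sets."""
    ids = list(object_ids)
    random.Random(seed).shuffle(ids)
    n_train, n_val = int(0.7 * len(ids)), int(0.2 * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# Hyper-parameter grid reported in the text.
GRID = list(itertools.product(
    [1e-3, 1e-4, 1e-5, 1e-6],  # lambda_ft, fine-tuning learning rate
    [1e-2, 1e-3, 1e-4],        # lambda_tr, learning rate of the new layers
    [0.4, 0.5, 0.6],           # p_drop, dropout probability
    [10, 20, 30, 40],          # number of epochs
    [10, 20, 30, 40],          # batch size
))
# 30 configurations are trained; the one with the highest validation f1-score
# is retained (lambda_ft=1e-5, lambda_tr=1e-3, p_drop=0.5, 30 epochs, batch 20).
configs = random.Random(0).sample(GRID, 30)
```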
IV. ROBOTIC GRASPING PRIMITIVES

In [20], Johansson and Edin affirm that the Central Nervous System "monitors specific, more-or-less expected, peripheral sensory events and use these to directly apply control signals that are appropriate for the current task and its phase". These signals are predicted (i.e. anticipatory, or feedforward). Driven by this observation, we decided to implement the robotic grasping strategies relying mostly on anticipatory actions. To do this, we took inspiration from the visual inspection of the videos described in the previous section, and decided to trigger primitive execution by specific events. The first event is generated by the detection of an object and scene classification. This triggers one primitive among all the available ones. We do not consider here flip, which cannot be implemented by the soft hand that we use in this work. As a trade-off between performance and complexity, we divide all primitives into two phases: i) approach and ii) reactive grasp. The transition between the first and the second phase is triggered by a contact event, detected as an abrupt acceleration of the fingertips (as read by the IMUs).
TABLE I
INITIAL ORIENTATION Q0 AND NORMALIZED DIRECTION OF APPROACH d FOR EACH PRIMITIVE

Strategy     | Q0                              | d
Top          | [0, 0.711, 0, 0.703]            | [0, 0, -1]
Top left     | [0.269, 0.657, -0.272, 0.649]   | [0, 0, -1]
Top right    | [0.269, -0.657, -0.272, -0.649] | [0, 0, -1]
Bottom       | [0.145, -0.696, 0.701, 0.030]   | [0, 1, 0]
Pinch        | [0.084, 0.816, 0.17, 0.458]     | [0, 0, -1]
Pinch left   | [0.116, 0.733, 0.483, 0.463]    | [0, 0, -1]
Pinch right  | [0.186, 0.890, -0.110, 0.400]   | [0, 0, -1]
Slide        | [0, 0.711, 0, 0.703]            | [0, 0, -1]
Lateral      | [0, -1, 0, 0]                   | [0, 1, 0]
A. Experimental setup

While the proposed techniques are not specifically tailored to this particular setup, it is convenient to introduce it here to simplify the description of the next subsections (see Sec. II). The robotic architecture is composed of two main components: a KUKA LWR-IV arm, and a Pisa/IIT SoftHand [15] as end effector. This anthropomorphic soft hand has 19 degrees of freedom, and only one degree of actuation. The intelligence embodied in the hand mechanics is to be considered an integral part of the control architecture itself, rather than a simple effector to act on the environment. An RGB camera is placed on top of the manipulator to simulate a first-person point of view. The robotic hand is equipped with IMUs for contact detection, triggering reactive strategies for grasping. The principal reference frames used in our control framework are depicted in Fig. 3.

B. Approach phase

During the approach phase, the human hand tends to follow straight lines connecting the starting position and the target [21]. We reproduce this behavior through the simple trajectory

r(t) = r_0 + d t,  Q(t) = Q_0,   (1)

where r ∈ R^3 is the hand base frame position in Cartesian coordinates, and Q ∈ R^4 its orientation as a quaternion, both expressed in global coordinates. r_0 ∈ R^3 and Q_0 ∈ R^4 are the initial position and orientation, while d ∈ R^3 is the direction of approach. All these three quantities are defined by the selected primitive, and dictated by the aim of heuristically reproducing as closely as possible the human behavior observed in the videos. Fig. 5 shows photos of the hand at t = 0 for top, pinch, lateral and bottom grasps. Tab. I summarizes the directions of approach and initial orientations for all the primitives.

[Fig. 6. Set of objects used in the experimental validation. None of them was part of the set used during training. A 30cm ruler is present in all the photos to help in qualitatively understanding object sizes.]
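A sketch of Eq. (1) in code: r_0, Q_0 and d come from Table I for the selected primitive, while the explicit speed scaling of the direction vector is an assumption of ours (Eq. (1) absorbs it into d).

```python
import numpy as np

def approach_pose(t, r0, d, Q0, speed=0.05):
    """Return (position, quaternion) of the hand base frame at time t [s]."""
    r = np.asarray(r0, dtype=float) + speed * t * np.asarray(d, dtype=float)
    return r, np.asarray(Q0, dtype=float)  # orientation held at Q0 throughout
```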
C. Grasp phase

The grasp phase is when the grasp actually happens, and thus where the primitives differentiate most from each other. When not differently specified, translations and rotations are here expressed in hand coordinates.

a) Top and lateral grasps: The reactive grasp framework leverages a dataset of 13 prototypical rearrangements of the hand, extracted from human movements. In [11], a subject was asked to reach and grasp a tennis ball while maneuvering a Pisa/IIT SoftHand. The grasp was repeated 13 times, from different approaching directions. The user was instructed to move the hand until contact with the object, and then to react by adapting the hand/wrist pose w.r.t. the object. Poses of the hand were recorded through a Phase Space motion tracking system. We subtract from the hand evolution recorded between the contact and the grasp (T represents the time between them) the posture of the hand at the contact. The resulting function Δ_i : [0, T] → R^7 describes the rearrangement performed by the subject to grasp the object. Acceleration signals a_1, ..., a_13 : [0, T] → R^3 were measured too, through the IMUs. To transform these recordings into a grasping strategy, we considered the acceleration patterns as a characteristic feature of the interaction with the object. When the Pisa/IIT SoftHand touches the object, the IMUs read an acceleration profile ā : [0, T̄] → R^3. The triggered sub-strategy is defined by the local rearrangement Δ_j, with

j = arg max_i ∫_0^T̄ ā^T(τ) a_i(τ) dτ.   (2)

When this motion is completely executed, the hand starts closing until the object is grasped. This procedure proved its effectiveness in preliminary power grasp experiments on objects approached similarly as specified here by the top primitive [11]. We extend here its use to the top left, top right and lateral strategies.
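In code, the selection of Eq. (2) reduces to an inner-product score between the measured profile and each recorded template, as in this sketch; signals are assumed already resampled to a common length and sampling rate.

```python
import numpy as np

def select_substrategy(abar, templates):
    """abar: (T, 3) measured acceleration; templates: list of 13 (T, 3) arrays.
    Returns the index maximizing the inner-product integral of Eq. (2)."""
    scores = [float(np.sum(a_i * abar)) for a_i in templates]  # discrete integral
    return int(np.argmax(scores))
```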
b) Bottom: To mimic the human behavior described in the previous section, when a contact is detected we rotate the hand along x by π/3 and translate it 300mm along y. In this way the palm base moves over, and the thumb can enter into the concave part of the object during hand closure.

[Fig. 7. Photosequences of grasps produced by the proposed architecture during validation: panels (a-h) present a top grasp of object 12, panels (i-p) a top-left grasp of object 5, and panels (q-x) a top-right grasp of object 16. Panels (a-b) depict the approach phase. In (c) the contact is detected and classified using (2). In panels (c-f) the hand finely changes its relative position w.r.t. the object, as prescribed by the reactive routine, and grasps it. In (g) and (h) the item is firmly lifted.]

[Fig. 8. Photosequence of a grasp produced by the proposed architecture during validation: bottom grasp of object 14. The hand starts from the initial configuration of the primitive in panel (a). The contact happens in panel (b), triggering the reactive routine. In panel (f) the object is firmly lifted.]
c) Pinches: In the pinch, pinch left and pinch right strategies, the hand just closes without changing its pose.

d) Slide: To mimic the human behavior, we realized an anticipatory routine composed of the following sub-phases, triggered by the initial contact with the object and the environment (see the sketch after this paragraph): i) apply a force on the object along the x axis to maintain the contact during sliding, by commanding a reference position to the hand 10 mm below the contact position; ii) slide the object towards the edge of the table; iii) unload the contact to avoid pushing the object off the table, by translating 10 mm along x; iv) rearrange the hand to favor the grasp, by translating 100mm along x and 50mm along z, and rotating along y by π/12 radians; v) close the hand.
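Under our own step encoding (the motion signs and the executor interface that would consume these steps are assumptions), the routine can be written as ordered data:

```python
import math

# The slide routine as an ordered list of relative hand motions (translations
# in mm, rotations in radians, expressed in the hand frame).
SLIDE_ROUTINE = [
    ("translate", {"dx": -10}),               # i) reference 10 mm below the contact (sign assumed)
    ("slide",     {"target": "table_edge"}),  # ii) drag the object until it partially protrudes
    ("translate", {"dx": 10}),                # iii) unload the contact
    ("translate", {"dx": 100, "dz": 50}),     # iv) rearrange the hand to favor the grasp...
    ("rotate",    {"axis": "y", "angle": math.pi / 12}),  # ...rotating pi/12 about y
    ("close_hand", {}),                       # v) close the hand
]
```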
D. Control

A Jacobian-based inverse kinematics algorithm is used to obtain the desired joint positions q_r from the prescribed end effector evolution. A joint-level impedance control is used to realize the motion, with K = 10 Nm/rad as stiffness and D = 0.7 Nms/rad as damping for each joint. The control law is τ(t) = K e(t) + D ė(t) + D̃(q, q̇), where τ are the applied joint torques, e = q_r - q and ė are the error at joint level and its derivative, and D̃ is a compensation of the robot dynamics evaluated by the KUKA embedded controller. All the control and sub-strategies implementation was performed in ROS.
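The control law can be sketched as follows; since the dynamic compensation D̃ is supplied by the KUKA embedded controller, it appears here only as a placeholder callback, and treating the joint reference as constant within a control step is our assumption.

```python
import numpy as np

K = 10.0  # Nm/rad, joint stiffness
D = 0.7   # Nms/rad, joint damping

def impedance_torque(q_ref, q, qdot, dynamics_compensation):
    """Per-joint torque command for the prescribed joint reference q_ref."""
    e = np.asarray(q_ref) - np.asarray(q)  # joint position error
    edot = -np.asarray(qdot)               # error derivative (q_ref held constant)
    return K * e + D * edot + dynamics_compensation(q, qdot)
```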
V. EXPERIMENTAL RESULTS

We test the effectiveness of the proposed architecture by performing table-top object grasping experiments. A table is placed in front of the system, as depicted in Fig. 3. The object is placed by an operator approximately in the center of the table. RGB information from the web-cam triggers scene classification through the proposed deep neural network, which is followed by primitive execution. The task is repeated three times. The exact position of the object and its orientation vary each time, the first within a circle of radius 100mm, the second over the full angle range. The whole process is repeated for each of the 20 objects depicted in Fig. 6, chosen so as to elicit different grasping strategies. Objects number 5, 6, 7, 8, 9, 10, 16 and 19 are classified with a different strategy depending on their positioning and orientation. We consider three tests for each possible classification. The total number of grasps tested is 111. None of the selected objects was used during the network training phase.

Tab. II summarizes the results in terms of the primitive used, successes and failures for each object. The overall grasping success rate is 81.1%. A grasp was considered successful if the robot maintained it for 5 seconds (after which the hand automatically opens). Note that objects 12 and 15 elicit only the top grasp primitive, independently from their orientation. They are indeed both (almost) rotationally symmetric, so the classifier does not take their orientation into account to select the grasp.

Looking instead at the primitive-specific success rates, we obtain: Top 85.7% (Fig. 7 (a-h)), Top left 73.3% (Fig. 7 (i-p)), Top right 100% (Fig. 7 (q-x)), Bottom 100% (Fig. 8), Pinch 55.6% (Fig. 9 (a-d)), Pinch left 55.6% (Fig. 9 (e-h)), Pinch right 66.7% (Fig. 9 (i-l)), Slide 83.3% (Fig. 10), Lateral 86.7% (Fig. 11).

[TABLE II. Strategy used, successes and failures for each grasp; one row per object, reporting the elicited strategy with its success and failure counts.]
VI. DISCUSSION

This work represents a substantial improvement w.r.t. [11], where a similar success rate was obtained for human-robot handover, while only exploratory tests were performed on autonomous grasping. It is worth mentioning that this paper represents, together with [12], the first work that validates a combination of deep learning techniques and soft hands over a large set of objects. Any formal comparison between the two works is prevented by the fact that neither this nor the other paper used a standardized object set and protocol [22], [23]. With this as a premise, it is worth noticing that our success rate is only fairly lower than the one in [12] (which reports 87% of successes, versus the 81% reported here). However, in our work, we considered a higher number of objects for the testing phase (20 versus 10), spanning a wider range of shapes and with larger differences w.r.t. the learning set. Another interesting consideration arises from a more in-depth analysis of the results. If we remove from the statistics the three objects that would require a pinch grasp (i.e. 7, 8, 9), the success rate jumps over 88%. This can be explained by an intrinsic feature of the soft hand we used, which was designed to perform power grasps. Nonetheless, using the environment as an enabling constraint, the end effector can still partially overcome this limitation. We are sure, and we will test it in the future, that using other versions of the SoftHand that can execute both pinch and power grasping (see e.g. [24]), the success rate will increase.

[Fig. 9. Photosequences of grasps produced by the proposed architecture during validation: panels (a-d) present a pinch grasp of object 7, panels (e-h) a pinch-left grasp of object 8, and panels (i-l) a pinch-right grasp of object 9. Panel (a) shows the hand initial configuration. The contact is established in panel (b) through interaction with the environment, which also guides the hand towards the grasp achieved in panel (c). In (d) the object is firmly lifted.]

VII. CONCLUSIONS

In this work, we proposed and validated a data-driven human-inspired architecture for autonomous grasping with soft hands. We achieved this goal by: i) introducing a novel deep neural network that processes the visual scene and predicts which action a human would perform to grasp an object from a table, ii) formulating and implementing an artificial counterpart of the strategies that we observed in humans, iii) combining them together in an integrated robotic platform, and iv) extensively testing the proposed architecture in the execution of 111 autonomous grasps, achieving an overall success rate of 81.1%. Future work will be devoted to testing the use of the SoftHand 2 [24] and the RBO hand [25] within this framework, both fulfilling the requirements of softness and anthropomorphism.
[Fig. 10. Photosequence of a grasp produced by the proposed architecture during validation: slide grasp of object 3. Panels (a-c) depict the approaching phase. In panels (d-e) the environment is exploited to guide the object to the table edge. In panels (f-g) the hand changes its relative position w.r.t. the object so as to favor the grasp, which is established in panels (h-i). In the last panel the item is firmly lifted.]

[Fig. 11. Photosequence of a grasp produced by the proposed architecture during validation: lateral grasp of object 4. Panels (a-c) present the approaching phase. In panel (c) contact is detected, and in (e) the grasp is established. The object is lifted in panel (f).]

REFERENCES

[1] A. Bicchi and V. Kumar, "Robotic grasping and contact: A review," in ICRA, vol. 348. Citeseer, 2000, p. 353.
[2] L. Birglen, T. Laliberté, and C. M. Gosselin, Underactuated robotic hands. Springer, 2007, vol. 40.
[3] C. Piazza, G. Grioli, M. Catalano, and A. Bicchi, "A century of robotic hands," Annual Review of Control, Robotics, and Autonomous Systems, vol. (in press), 2019.
[4] C. Erdogan, A. Schroder, and O. Brock, "Coordination of intrinsic and extrinsic degrees of freedom in soft robotic grasping," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 1-6.
[5] M. Haas, W. Friedl, G. Stillfried, and H. Höppner, "Human-robotic variable-stiffness grasps of small-fruit containers are successful even under severely impaired sensory feedback," Frontiers in Neurorobotics, vol. 12, p. 70, 2018.
[6] X. Yan, J. Hsu, M. Khansari, Y. Bai, A. Pathak, A. Gupta, J. Davidson, and H. Lee, "Learning 6-dof grasping interaction via deep geometry-aware 3d representations," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 1-9.
[7] P. Schmidt, N. Vahrenkamp, M. Wächter, and T. Asfour, "Grasping of unknown objects using deep convolutional neural networks based on depth images," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 6831-6838.
[8] A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu, E. Romo et al., "Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 1-8.
[9] A. Gupta, C. Eppner, S. Levine, and P. Abbeel, "Learning dexterous manipulation for a soft robotic hand from human demonstrations," in Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016, pp. 3786-3793.
[10] T. Nishimura, K. Mizushima, Y. Suzuki, T. Tsuji, and T. Watanabe, "Thin plate manipulation by an under-actuated robotic soft gripper utilizing the environment," in Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on. IEEE, 2017, pp. 1236-.
[11] M. Bianchi, G. Averta, E. Battaglia, C. Rosales, A. Tondo, M. Poggiani, G. Santaera, S. Ciotti, M. G. Catalano, and A. Bicchi, "Tactile-based grasp primitives for soft hands: Applications to human-to-robot handover tasks and beyond," in Robotics and Automation (ICRA), 2019 IEEE International Conference on. IEEE, 2019.
[12] C. Choi, W. Schwarting, J. DelPreto, and D. Rus, "Learning object grasping for soft robot hands," IEEE Robotics and Automation Letters, 2018.
[13] T. Feix, J. Romero, H.-B. Schmiedmayer, A. M. Dollar, and D. Kragic, "The grasp taxonomy of human grasp types," IEEE Transactions on Human-Machine Systems, vol. 46, no. 1, pp. 66-77, 2016.
[14] G. A. Miller, E. Galanter, and K. H. Pribram, Plans and the structure of behavior. Adams Bannister Cox, 1986.
[15] C. Della Santina, C. Piazza, G. M. Gasparri, M. Bonilla, M. G. Catalano, G. Grioli, M. Garabini, and A. Bicchi, "The quest for natural machine motion: An open platform to fast-prototyping articulated soft robots," IEEE Robotics & Automation Magazine, vol. 24, no. 1, pp. 48-56, 2017.
[16] C. Eppner, R. Deimel, J. Alvarez-Ruiz, M. Maertens, and O. Brock, "Exploitation of environmental constraints in human and robotic grasping," The International Journal of Robotics Research, vol. 34, no. 7, pp. 1021-1038, 2015.
[17] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," arXiv preprint, 2017.
[18] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818-2826.
[19] Y. Yang and X. Liu, "A re-examination of text categorization methods," in Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 1999, pp. 42-49.
[20] R. S. Johansson and B. B. Edin, "Predictive feed-forward sensory control during grasping and manipulation in man," Biomedical Research-Tokyo, vol. 14, pp. 95-95, 1993.
[21] T. Flash, "The control of hand equilibrium trajectories in multi-joint arm movements," Biological Cybernetics, vol. 57, no. 4-5, pp. 257-274, 1987.
[22] J. Leitner, A. W. Tow, N. Sünderhauf, J. E. Dean, J. W. Durham, M. Cooper, M. Eich, C. Lehnert, R. Mangels, C. McCool et al., "The ACRV picking benchmark: A robotic shelf picking benchmark to foster reproducible research," in Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017, pp. 4705-4712.
[23] B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, "Yale-CMU-Berkeley dataset for robotic manipulation research," The International Journal of Robotics Research, vol. 36, no. 3, pp. 261-268, 2017.
[24] C. Della Santina, C. Piazza, G. Grioli, M. G. Catalano, and A. Bicchi, "Toward dexterous manipulation with augmented adaptive synergies: The Pisa/IIT SoftHand 2," IEEE Transactions on Robotics, no. 99, 2018.
[25] R. Deimel and O. Brock, "A novel type of compliant and underactuated robotic hand for dexterous grasping," The International Journal of Robotics Research, vol. 35, no. 1-3, pp. 161-185, 2016.