First Joint Integration Workshop
In July 2012, the partners of the RoboHow consortium met in the labs of the IAS group at TUM for the first joint integration workshop of the project. The main goals of this workshop were to foster mutual understanding between the partners, agree on interfaces between the partners' systems, and coordinate the work towards the first demonstrator.
The workshop ran for a week, from Monday, July 16th, to Saturday, July 21st. The final version of the workshop agenda can be found here (last updated July 5th).
Information about the iTaSC workshop can be found here.
ROS Workshop Slides
The ROS workshop included, among other sessions, a tutorial giving developers new to ROS a head start with the system, followed by a session on best practices in ROS and on sharing code and software within the project. The slides can be accessed below:
Pair-wise partner integration
Individual discussions in sub-groups yielded the following conclusions and agreed-upon collaboration efforts:
CRAM-iTaSC: an initial version of an interface for exchanging motion specifications and feedback was agreed upon; it will enable the CRAM high-level system to gain more insight into iTaSC-based motion specification and execution
iTaSC-SoT: the involved partners decided to first integrate the SoT solver 'hsot' into iTaSC as a first step towards porting some of SoT's features into the iTaSC framework
EPFL-TUM: a dynamical model for reaching an object was successfully learned from previously recorded human motion data, and kinesthetically demonstrated motion data from the PR2 robot opening a drawer were recorded. Both parties agreed to exchange software and to jointly port CDS and SEDS to the Gazebo simulator, facilitating software exchange within the project
FORTH-TUM: During the RoboHow workshop, FORTH and TUM discussed, agreed upon, and started working on two tasks to be realized within the first year of the project. First, we will create an interface between FORTH's vision-based tracking system and TUM's knowledge processing framework, which will allow us to ground logical predicates in observations of human demonstrations. Second, we will adapt the video annotation tool developed at TUM to label video sequences recorded at FORTH, which will be used for learning.
Seminar Room 5001, on the 5th floor
Contacts: firstname.lastname@example.org or lisca[at]cs[dot]tum[dot]edu