That is a very simple question, but unfortunately the answer is not.
Determining intentionality in other beings is an inductive practice, because multimodal experiences, including conscious states such as intentionality, cannot be fully described from the outside.
In one sense, you could say that intentionality is the quality of an agent's behavior being directed toward an objective with special regard to an expected outcome (e.g., I threw the ball with the intention that the dog fetch it).
However, because there is no dialogue between human and robot, we cannot verify this intention verbally. It is unlikely that the researchers built a robot that can answer the question, "What did you intend by this action?" In fact, the capacity even to act with special regard to an expected outcome implies a certain amount of metacognition.
In the end, we can predict how the robots will act by ascribing intentionality to their actions ("the robot is trained to want this outcome, so it will do x"), but we have no good evidence that they are intentional agents.