Senior Technical Trainer
FDM Group, Toronto, ON
I earned my Ph.D. in Cognitive Robotics at Umeå University, Sweden, where my doctoral research was conducted within the framework of the Marie Curie Initial Training Network INTeractive RObotics (INTRO) as an Early Stage Researcher. My primary research focus centered on the development of robot learning techniques derived from Learning from Demonstration and Imitation, leveraging various facets of human-robot interaction. My academic journey began with a Bachelor's degree in Computer Science and culminated in a Master's degree in Artificial Intelligence and Robotics, awarded in 2003 and 2006, respectively. During my undergraduate studies, I honed my programming and problem-solving skills, ultimately leading to the design and implementation of the first Iranian intelligent humanoid robot (Firatelloid).
Over the past two decades, I have had the privilege of collaborating with dynamic and accomplished teams, which have profoundly shaped my professional trajectory. My roles have spanned from full-stack developer and technical manager to researcher, primarily within the domains of AI and Robotics. Additionally, I have had the honor of instructing and mentoring recent graduates from esteemed institutions in Canada and the United States. My instructional responsibilities encompassed contemporary programming paradigms, prevalent enterprise application development frameworks, industry-standard software development practices, the construction of AI-based solutions, and the cultivation of professional ethics.
Currently, I serve as the Chief Technology Officer and Co-Founder of AiZtech Labs Inc. Our company pioneers a groundbreaking category of digital health technology known as Selfie Diagnostics, a smartphone-based platform capable of screening various medical conditions by analyzing the eye and facial topography solely through the smartphone's camera. Our solution requires no chemical agents or specialized equipment.
During my leisure hours, you may find me immersed in the acquisition of new languages or engaged in playing musical instruments, such as the Setar, Piano, or Flute, among others. Additionally, I possess a deep-seated passion for music composition, having successfully produced three albums to date.
My motto in life is very simple:
FDM Group, Toronto, ON
Umeå University, Computing Science Department
Humboldt-Universität zu Berlin, Informatics Department
Aptech Institute, Tehran IIDCO Branch
Ph.D. in Cognitive Robotics
Master of Science in Artificial Intelligence and Robotics
Tehran IA University
Bachelor of Science in Computer Hardware Engineering
Imitation is heavily utilized within robotics, where it is often denoted Learning from Demonstration (LfD) or Imitation Learning (IL). A human demonstrates a desired behavior by tele-operating a robot. In this research, a robot will be equipped with a set of parameterized high-level behaviors. Models and techniques for combining behaviors into sequences will be developed. By recognizing a behavior on the fly during a demonstration, a shared control system can be constructed. In this way, the robot may take over control during the demonstration itself. The human is relieved of the hard task of tele-operating but may, if needed, intervene and correct the robot's control signals.
The research on believable agents focuses on creating interactive agents that give users the illusion of being human. Application domains include, among others, human-computer interaction, interactive entertainment, and education. Believability is accomplished by letting the agents express their emotions in their behavior and by equipping them with clearly distinguishable personalities. This has consequences for the agent's internal model of deliberation: the agent must have knowledge of emotions and know how they can be expressed. If social behavior matters in the domain in which the agents act, they must also have knowledge of norms and values and how these may be expressed. Rationality means acting appropriately in various situations, a topic that has been the subject of many studies with remarkable results. Moreover, rationality enables applications to have more believable interactions between man and machine, which is the most important consideration for these agents. The combination of emotions, rationality, and personality yields believable agents. An agent can take on many different roles: a museum guide, a teaching assistant, or an actor on a stage are good examples of agents that should appear credible to whoever interacts with them. A well-formed agent makes decisions according to its perceptions and its rational and emotional states. Because such agents simulate human rationality, they must learn from their own experiences, manifest personality, and eventually modify their features. In agents of this type, rationality, emotions, personality, and behavior are closely linked and influence one another in many ways. This work therefore investigates the basic relations among these aspects, building on a reading of previous studies.
The fundamental objective of this project was to improve human-machine interaction, mainly in domains applied to humans where interaction is not centered on oral communication. In this kind of project, visual media are the main channels of interaction between the human and the robot.
Automated robot navigation via GPS suffers from high error rates, especially with low-quality receivers. This project aimed to optimize the navigation process and reduce error rates using Genetic Algorithms.
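As a rough illustration of the idea, a genetic algorithm can search for a correction that minimizes positioning error against known reference points. The sketch below is a minimal, self-contained example; the constant-bias error model, population parameters, and all numbers are illustrative assumptions, not the project's actual implementation:

```python
import random

random.seed(0)

# Surveyed reference points and the raw GPS fixes recorded at them.
# A constant receiver bias plus Gaussian noise is simulated; all values
# here are invented for illustration, not real GPS data.
TRUE_BIAS = (3.0, -2.0)
refs = [(float(x), float(y)) for x in range(0, 50, 10) for y in range(0, 50, 10)]
fixes = [(x + TRUE_BIAS[0] + random.gauss(0, 0.5),
          y + TRUE_BIAS[1] + random.gauss(0, 0.5)) for x, y in refs]

def error(bias):
    """Mean squared residual after subtracting a candidate bias estimate."""
    dx, dy = bias
    return sum((fx - dx - rx) ** 2 + (fy - dy - ry) ** 2
               for (rx, ry), (fx, fy) in zip(refs, fixes)) / len(refs)

def evolve(pop_size=40, generations=60):
    """Evolve 2-D bias candidates with truncation selection, blend
    crossover, and Gaussian mutation."""
    pop = [(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)                 # lower error = fitter
        parents = pop[:pop_size // 4]       # truncation selection
        children = list(parents)            # elitism: keep the best quarter
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append(((a[0] + b[0]) / 2 + random.gauss(0, 0.3),
                             (a[1] + b[1]) / 2 + random.gauss(0, 0.3)))
        pop = children
    return min(pop, key=error)

best = evolve()
```

After evolution, subtracting `best` from raw fixes leaves only the simulated measurement noise, which is the sense in which the error rate is reduced.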
Firatelloid (First Iranian Intelligent Humanoid Robot) is the first Iranian android. It won the Khwarizmi Young Award, a top Iranian scientific award, and paved the way for other Iranian roboticists to build humanoid robots.
Designing and developing a mobile robot controller with various functionalities, including:
The need for intelligent monitoring systems for financial markets, especially Foreign Exchange, has become a necessity for keeping track of this market. Financial markets conform to certain mathematical concepts, which allows them to be analyzed with different Artificial Intelligence (AI) algorithms. Data Fusion has been applied in many fields, and the corresponding applications utilize numerous mathematical tools. The Jimbo Forex Project aimed to apply data fusion techniques to support trading decisions based on technical analysis in the Foreign Exchange Market.
Features
The Jimbo Forex Project embeds powerful Artificial Intelligence algorithms to help traders make better trading decisions. JFP is a signal generator that monitors the current market state by checking different indicator values depending on the chosen strategy.
Breast cancer is one of the most malignant and most common cancers in the world. This research project aimed to find a way to calculate the probability of having breast cancer, and the need for mammography, by means of expert systems and decision tree methodologies.
This project was intended for understanding speech and analyzing sentences using grammars, semantics, and SAPI. Vocabulary can be categorized into different groups such as verbs, nouns, and adjectives. All this information is stored in and retrieved from a database.
My publications are as follows:
Learning from Demonstration (LfD) is an established robot learning technique through which a robot acquires a skill by observing a human or robot teacher demonstrating the skill. In this paper we address the ambiguity involved in inferring the intention behind one or several demonstrations. We suggest a method based on priming, and a memory model with similarities to human learning. Conducted experiments show that the developed method leads to faster and improved understanding of the intention behind a demonstration by reducing ambiguity.
Robotics is tightly connected to both artificial intelligence (AI) and control theory. Both AI and control based robotics are active and successful research areas, but research is often conducted by well separated communities. In this paper, we compare the two approaches in a case study for the design of a robot that should move its arm towards an object with the help of camera data. The control based approach is a model-free version of Image Based Visual Servoing (IBVS), which is based on mathematical modeling of the sensing and motion task. The AI approach, here denoted Behavior-Based Visual Servoing (BBVS), contains elements that are biologically plausible and inspired by schema theory. We show how the two approaches lead to very similar solutions, even identical given a few simplifying assumptions. This similarity is shown both analytically and numerically. However, in a simple picking task with a 3 DoF robot arm, BBVS shows significantly higher performance than the IBVS approach, partly because it contains more manually tuned parameters. While the results obviously do not apply to all tasks and solutions, they illustrate strengths and weaknesses of both approaches, and how they are tightly connected and share many similarities despite very different starting points and methodologies.
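For context, classical IBVS computes a camera velocity from the image-feature error via the pseudo-inverse of the interaction matrix, v = -λ L⁺ (s - s*). The sketch below simulates this law for point features with a translational-only camera, a common simplification; the feature coordinates and gains are invented for illustration and are not taken from the paper:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of one point feature (x, y) at depth Z, restricted
    to a translational-only camera (3 DoF) -- a common IBVS simplification."""
    return np.array([[-1.0 / Z, 0.0, x / Z],
                     [0.0, -1.0 / Z, y / Z]])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Classical IBVS control law: v = -lam * L^+ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in s.reshape(-1, 2)])
    return -lam * np.linalg.pinv(L) @ (s - s_star), L

# Two point features driven toward a goal configuration (values invented).
s = np.array([0.2, 0.1, -0.1, 0.3])
s_star = np.array([0.0, 0.0, -0.3, 0.2])
Z, dt = 1.0, 0.1
err0 = np.linalg.norm(s - s_star)
for _ in range(100):
    v, L = ibvs_velocity(s, s_star, Z)
    s = s + dt * (L @ v)   # feature kinematics: s_dot = L v
```

Because the error here lies in the range of L, each step shrinks it by roughly the factor (1 - λ·dt), so the features converge exponentially to the goal.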
In many robotics shared control applications, users are forced to focus hard on the robot due to the task's high sensitivity or the robot's misunderstanding of the user's intention. This brings frustration and dissatisfaction to the user and reduces overall efficiency. The user's intention is sometimes unclear and hard to identify without some kind of bias in the identification process. In this paper, we present a solution in which an attentional mechanism helps the robot to recognize the user's intention. The solution uses a priming mechanism and parameterized behavior primitives to support intention recognition and improve shared control for teleoperation tasks.
Building general purpose autonomous robots that suit a wide range of user-specified applications requires a leap from today's task-specific machines to more flexible and general ones. To achieve this goal, one should move from traditional preprogrammed robots to learning robots that can easily acquire new skills. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or robot tutor, are among the most popular learning techniques. Showing the robot how to perform a task is often more natural and intuitive than figuring out how to modify a complex control program. However, teaching robots new skills such that they can reproduce the acquired skills under any circumstances, at the right time and in an appropriate way, requires a good understanding of all challenges in the field.
Studies of imitation learning in humans and animals show that several cognitive abilities are engaged to learn new skills correctly. The most remarkable ones are the ability to direct attention to important aspects of demonstrations, and adapting observed actions to the agent's own body. Moreover, a clear understanding of the demonstrator's intentions and an ability to generalize to new situations are essential. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the robot's repertoire.
The goal of this thesis is to develop methods for learning from demonstration that mainly focus on understanding the tutor's intentions, and recognizing which elements of a demonstration need the robot's attention. An architecture containing required cognitive functions for learning and reproduction of high-level aspects of demonstrations is proposed. Several learning methods for directing the robot's attention and identifying relevant information are introduced. The architecture integrates motor actions with concepts, objects and environmental states to ensure correct reproduction of skills.
Another major contribution of this thesis is methods to resolve ambiguities in demonstrations where the tutor's intentions are not clearly expressed and several demonstrations are required to infer intentions correctly. The provided solution is inspired by human memory models and priming mechanisms that give the robot clues that increase the probability of inferring intentions correctly. In addition to robot learning, the developed techniques are applied to a shared control system based on visual servoing guided behaviors and priming mechanisms.
The architecture and learning methods are applied and evaluated in several real world scenarios that require clear understanding of intentions in the demonstrations. Finally, the developed learning methods are compared, and conditions where each of them has better applicability are discussed.
In domains where robots carry out human tasks, the ability to learn new behaviors easily and quickly plays an important role. Two major challenges with Learning from Demonstration (LfD) are to identify what information in a demonstrated behavior requires the robot's attention, and to generalize the learned behavior such that the robot is able to perform it in novel situations. The main goal of this paper is to incorporate Ant Colony Optimization (ACO) algorithms into LfD in an approach that focuses on understanding the tutor's intentions and learning the conditions for exhibiting a behavior. The proposed method combines ACO algorithms with semantic networks and a spreading activation mechanism to reason about and generalize the knowledge obtained through demonstrations. The approach also provides structures for behavior reproduction under new circumstances. Finally, the applicability of the system in an object shape classification scenario is evaluated.
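For context, spreading activation over a semantic network propagates a decaying activation value from a source concept along weighted links, so related concepts light up in proportion to their associative distance. The toy sketch below illustrates only the mechanism; the concepts, weights, and parameters are invented for illustration and are not from the paper:

```python
# A minimal spreading-activation sketch over a toy semantic network.
# Edges map a concept to (neighbor, link-weight) pairs; all names and
# weights here are hypothetical.
network = {
    "cup":       [("container", 0.9), ("handle", 0.7)],
    "container": [("holds-liquid", 0.8)],
    "handle":    [("graspable", 0.8)],
}

def spread(source, decay=0.5, threshold=0.05):
    """Propagate activation from a source concept. Activation fades by
    `decay` per hop, scaled by the link weight, and spreading stops once
    it falls below `threshold`."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in network.get(node, []):
            a = activation[node] * weight * decay
            if a > activation.get(neighbor, 0.0) and a > threshold:
                activation[neighbor] = a
                frontier.append(neighbor)
    return activation
```

Calling `spread("cup")` activates "container" and "handle" strongly and "holds-liquid" and "graspable" more weakly, which is the kind of graded relevance signal a learner can use to generalize from demonstrated objects to related ones.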
INTRO is a four-year project dealing with the development of integrated capabilities for autonomous robots. The work presented here covers a field robotic assistant scenario for Search and Rescue applications. It was carried out as an integration project between academic and industrial partners, and demonstrated on a mobile outdoor robot equipped with a manipulator.
Learning techniques are drawing extensive attention in the robotics community. Some reasons for moving from traditional preprogrammed robots to more advanced, human-inspired techniques are to save time and energy, and to allow non-technical users to work with robots easily. Learning from Demonstration (LfD) and Imitation Learning (IL) are among the most popular learning techniques for teaching robots new skills by observing a human or robot tutor.
Flawlessly teaching robots new skills by LfD requires a good understanding of all challenges in the field. Studies of imitation learning in humans and animals show that several cognitive abilities are engaged to correctly learn new skills. The most remarkable ones are the ability to direct attention to important aspects of demonstrations, and adapting observed actions to the agent's own body. Moreover, a clear understanding of the demonstrator's intentions is essential for correctly and completely replicating the behavior with the same effects on the world. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the repertoire.
Considering the identified main challenges, this thesis attempts to model imitation learning in robots, mainly focusing on understanding the tutor's intentions and recognizing which elements of the demonstration need the robot's attention. To this end, an architecture containing the required cognitive functions for learning and reproducing high-level aspects of demonstrations is proposed. Several learning methods for directing the robot's attention and identifying relevant information are introduced. The architecture integrates motor actions with concepts, objects and environmental states to ensure correct reproduction of skills. This is further applied in learning object affordances, behavior arbitration and goal emulation.
The architecture and learning methods are applied and evaluated in several real world scenarios that require clear understanding of goals and what to look for in the demonstrations. Finally, the developed learning methods are compared, and conditions where each of them has better applicability are specified.
This paper gives a brief overview of challenges in designing cognitive architectures for Learning from Demonstration. By investigating the features and functionality of some related architectures, we propose a modular architecture particularly suited for sequential learning of high-level representations of behaviors. We work towards designing and implementing goal-based imitation learning that not only allows the robot to learn the necessary conditions for executing particular behaviors, but also to understand the intentions of the tutor and reproduce the same behaviors accordingly.
In this paper we present an approach for high-level behavior recognition and selection, integrated with a low-level controller, to help the robot learn new skills from demonstrations. By using a Semantic Network as the core of the method, the robot gains the ability to model the world with concepts and relate them to low-level sensory-motor states. We also show how the generalization ability of Semantic Networks can be used to extend learned skills to new situations.
Most of humans' day-to-day tasks include sequences of actions that lead to a desired goal. In domains where humans are replaced by robots, the ability to learn new skills easily and quickly plays an important role. The aim of this research paper is to incorporate sequential learning into Learning from Demonstration (LfD) in an architecture which mainly focuses on high-level representation of behaviors. The primary goal of the research is to investigate the possibility of utilizing Semantic Networks to enable the robot to learn new skills in sequences.
Financial intelligent monitoring systems are an emerging research area and also have great commercial potential. Traditional technical analysis relies on statistics, including technical indicators, to determine turning points in the trend. Since financial markets conform to certain mathematical concepts and can therefore be analyzed with different Artificial Intelligence (AI) algorithms, this paper applies the Induced Ordered Weighted Averaging (IOWA) operator to support trading decisions based on technical analysis in the Foreign Exchange Market.
During the past decade, the Financial Services industry has been transformed through the application of computer-based analytics for forecasting, product design, portfolio optimization, risk management and intelligent advisory systems. This paper introduces a trading system for financial markets designed to improve traders' decision-making process. The paper applies the Uncertain Ordered Weighted Averaging (UOWA) operator as a decision-making algorithm (DMA) and compares the results with regular (manual) trading based on technical analysis in the Foreign Exchange Market.
The need for intelligent monitoring systems for financial markets, especially Foreign Exchange, has become a necessity for keeping track of this market. Financial markets conform to certain mathematical concepts, which allows them to be analyzed with different Artificial Intelligence (AI) algorithms. Data Fusion has been applied in many fields, and the corresponding applications utilize numerous mathematical tools. This paper applies the Ordered Weighted Averaging (OWA) operator to support trading decisions based on technical analysis in the Foreign Exchange Market.
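For context, the OWA operator weights input values by their rank rather than their source: the inputs are sorted in descending order and combined with a position-based weight vector, so one formula interpolates between the maximum, the minimum, and a plain average. A minimal sketch (the indicator signals below are hypothetical, not from the paper):

```python
def owa(values, weights):
    """Ordered Weighted Averaging: sort values in descending order, then
    take the weighted sum with position-based (not source-based) weights."""
    if len(values) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must match values and sum to 1")
    return sum(w * b for w, b in zip(weights, sorted(values, reverse=True)))

# Hypothetical indicator signals in [-1, 1] (sell .. buy). The weight
# vector tunes the operator between optimistic (max) and pessimistic (min).
signals = [0.8, -0.2, 0.5, 0.1]
print(owa(signals, [1.0, 0.0, 0.0, 0.0]))      # -> 0.8 (pure max)
print(owa(signals, [0.25, 0.25, 0.25, 0.25]))  # -> ~0.3 (plain average)
```

In a fusion setting like the one described above, each indicator contributes one signal and the chosen weight vector encodes how optimistic or conservative the combined trading decision should be.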
This paper focuses on designing the goal-based rational component of a believable agent that interacts with humans through facial expressions in communicative scenarios such as teaching. One of the main concerns of the proposed model is to define interactions among rationality, personality and emotion in order to make rational decisions with emotional regulation. Our research aims to improve the decision-making process by applying Data Fusion techniques, especially the Ordered Weighted Averaging (OWA) operator, as a goal selection mechanism. The issue of obtaining weights for OWA aggregation is also discussed. Finally, the suggested algorithm is tested and results are provided on a real benchmark.
This research project aimed to design a rational believable agent with a goal-based rational-emotional architecture that interacts with humans through facial expressions in communicative scenarios. The proposed model defines interactions among rationality, personality and emotion to make rational decisions with emotional regulation, and improves the decision-making process by applying the Ordered Weighted Averaging (OWA) operator as a goal selection mechanism.
All intelligent creatures that we know of have emotions. Humans, in particular, are the most expressive, emotionally complex, and socially sophisticated of all. Over the years there have been many computational models of artificial emotions using different techniques to implement the concept of emotional agents. In this paper, we propose applying the Ordered Weighted Averaging (OWA) operator in the procedure of both internal and environmental assessments of releasers, all of which are considered with respect to the robot's wellbeing and its goals, following the architecture of MIT's sociable robot (Kismet).
This thesis aims to create a believable agent which is able to decide rationally and use emotions to regulate the decisions it makes. Looking inside the general architecture, it is clear that agents equipped with emotions deliberate over the processes assumed to be their best choices. Rationality means acting appropriately in various situations. Moreover, it enables applications to have more believable interactions between man and machine, which is the most important consideration for these agents. The combination of emotions, rationality and personality yields believable agents. The fundamental objective of this research work is the improvement of the decision-making algorithm using Data Fusion, mainly in domains applied to humans in which both rationality and emotions play effective roles in the decision-making process.
Passed with the highest grade (20/20).
I have taught several courses in computer science including programming languages, software engineering, microcontrollers, artificial intelligence and robotics.
This course gives an introduction to different Deep Learning models and how to implement them using TensorFlow.
This course focuses on building Enterprise Applications with Java and covers the most popular Java enterprise frameworks including: Servlets, JSP, JPA, Hibernate, Spring and Spring MVC. The course also covers building RESTful services and microservices with Spring Boot.
This course gives an overview of fundamental and advanced topics in Java.
This course reviews the most popular Java enterprise frameworks including: Servlets, JSP, JPA, Hibernate, Spring and Spring MVC.
This course reviews the most popular Java enterprise frameworks including: Struts, Spring, Hibernate, Ant and Maven.
This course gives an overview of core Java.
This course contains topics in J2EE including: GUI design with Swing, JDBC, RMI, Networking and Beans.
This course contains topics in J2EE including: Servlets, JSP, JSTL & EL, EJB and JSF.
This course covers the integration of XML and Java, XMLBeans, AJAX and Java web services.
This course gives an introduction to C and object oriented programming with C++.
This course gives an introduction to Atmel 8051 microcontroller.
I have appeared in public media in both Sweden and Iran including live and recorded TV and radio programs. Below is the list of my recent interviews.
This section contains pictures I took during my trips.