
ABSTRACT

Our daily tasks require us to extract data from the outside environment and perform operations on the basis of useful information obtained from the raw data; we then respond so as to accomplish our tasks. There are basically three steps involved: input, processing, and output. A similar task can be performed by a smart robot.

The project idea is to design an autonomous robot which will identify (know there is an object of interest), recognize (find out the actual meaning of the object through object recognition) and respond to accomplish a pre-specified task, taking its input from a sensor, e.g. a camera (PiCamera). The objects intended for detection are signs, e.g. a sign consisting of an arrow which points in the 'left', 'right', 'U-turn' or 'back' direction, etc. There will be a planned arena for the robot to follow, in which different signs (say left, right, U-turn) along with a target object (say a circular ball) will be placed. The robot will reach the target object by following the details extracted from the various signs introduced along its course. The robot will make use of computer vision to extract relevant data from the outer environment.


CHAPTER 1: INTRODUCTION

1.1       Background

The word robot was coined by the Czech novelist Karel Capek in a 1920 play titled Rossum's Universal Robots (RUR). Robot is a Czech word for worker or servant. In the present era, robots play a crucial role in mass production due to their agility and accurate performance. The future era of robotics comprises more intelligent robotic structures that are more compact and have even faster processing, with a wide range of applications from home-based automation to industrial machinery, and from war-craft to intelligent surveillance robots.

Robots have been widely used in the automobile industry since the late 20th century for precise cutting, welding, placing and carrying purposes. Robotics has achieved its greatest success to date in the world of industrial manufacturing. Robot arms, or manipulators, comprise a 2 billion dollar industry.

1.2       Problem Statement

The project is based on one of the fundamental problems that a robot experiences while interacting with the environment. A robot may require manual feedback in order to perform its required task.

Manual intervention is a barrier to achieving good results and accuracy. Human intervention can be minimized by giving visual capabilities to the robot in order to achieve diverse behavioral responses under different circumstances. We are using computer vision (CV) techniques to realize the solution to this particular problem.

1.2.1    Computer Vision

CV can be described as "a field that involves methods for obtaining, processing, and interpreting images and, in general, multi-dimensional data from the outside world in order to generate mathematical or symbolic information, e.g., as a sequence of decisions.

The main idea is the implementation of the abilities of human vision by electronically processing and understanding an image. This image analysis can be seen as the extraction of numerical information from image data using analytical methods drawn from geometry, statistics, physics, and learning theory. Humans employ their eyes and brains to look at and visually understand the world around them. CV is the science that intends to give a similar capability to a machine. CV deals with the automatic extraction, processing and comprehension of useful data from a single image or a set of images (a video clip). It involves the implementation of numerical and algorithmic methods to achieve automatic visual understanding. CV is concerned with the theory behind artificial systems that extract information from images. The image data can take various formats, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner.

As a technological discipline, CV intends to apply its schemes and illustrations for the implementation of CV systems." CV can be further categorized into machine vision (MV). MV is the approach used to provide imaging-based automatic inspection and analysis for applications such as automatic handling, process control, and robot guidance, usually in industry. MV tends to focus on applications, mainly in manufacturing and inspection, e.g., vision-based robots and systems for vision-based inspection, measurement, or picking (robotic arms).

1.3       Project Objectives

The smart robot (able to detect and follow signs) is designed and implemented above subsystem level, and almost none of the software or hardware is built from scratch. The controllers (both master and slave) are available for purchase along with the other peripherals (chassis, sensors, actuators), and OpenCV is released under a BSD license and is hence free for both academic and commercial use.

Our goal is to make all the controllers and peripherals work simultaneously in order to achieve detection and tracking. The objectives of the project are as follows:

·       Implement OpenCV with Python.
·       Compare the raw image to pre-specified images and detect the sign from a list of signs.
·       Choose the necessary action after recognizing a particular sign (i.e. turn left, right, backwards or stop, etc.).
·       Reach the destination efficiently.
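The second objective, matching a captured image against a list of pre-specified signs, can be illustrated with a minimal pure-Python sketch. The 5x5 binary "images", the arrow patterns and the mismatch threshold below are toy assumptions for illustration only; the actual robot would compare full PiCamera frames using OpenCV.

```python
# Toy sketch of sign matching: compare a captured binary image against a
# dictionary of reference signs and pick the closest match. The 5x5 grids
# and the mismatch threshold are illustrative assumptions.

def mismatch(img_a, img_b):
    """Count how many pixels differ between two equal-sized binary images."""
    return sum(a != b
               for row_a, row_b in zip(img_a, img_b)
               for a, b in zip(row_a, row_b))

def detect_sign(raw, references, max_mismatch=3):
    """Return the name of the best-matching reference sign, or None."""
    best_name, best_score = None, max_mismatch + 1
    for name, ref in references.items():
        score = mismatch(raw, ref)
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score <= max_mismatch else None

LEFT = [[0, 0, 1, 0, 0],
        [0, 1, 0, 0, 0],
        [1, 1, 1, 1, 1],
        [0, 1, 0, 0, 0],
        [0, 0, 1, 0, 0]]

RIGHT = [[0, 0, 1, 0, 0],
         [0, 0, 0, 1, 0],
         [1, 1, 1, 1, 1],
         [0, 0, 0, 1, 0],
         [0, 0, 1, 0, 0]]

SIGNS = {"left": LEFT, "right": RIGHT}

if __name__ == "__main__":
    # A noisy capture of the left-arrow sign (one pixel flipped).
    captured = [row[:] for row in LEFT]
    captured[0][2] = 0
    print(detect_sign(captured, SIGNS))  # -> left
```

The same idea scales to real frames: the mismatch score becomes a pixel-wise difference (or template-matching score) and the threshold decides whether any known sign is actually present.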

1.4       Project Scope

The scope of the study is as follows:

·       Using a Raspberry Pi Model B+ V1.2 as the master processor (object detection and tracking).
·       Using an Arduino Uno as the slave processor (motor drive and obstacle avoidance). It will have a serial communication link with the Raspberry Pi and will be used to send commands to the drive motors. The Arduino will also read inputs from the ultrasonic sensors and send that information to the Raspberry Pi.
·       Using a PiCamera to capture real-world images or video (a set of images).
·       Using a Dual H-Bridge L298 as the DC motor driver.
·       Using an ultrasonic sensor for obstacle avoidance.

·       Using DC motors for robotic movement.

CHAPTER 2: EXPERIMENTAL SETUP

2.1       Components

In this section, the specific components that were utilized during the experimentation and testing process are described, with their principles of operation, in the same order in which they were tested.

2.1.1    Raspberry Pi

The Raspberry Pi is a series of small single-board computers developed in the United Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in schools and in developing countries. We are using the Raspberry Pi for image processing purposes. The microprocessor model is the RPi B+ V1.2. We use this model as it has more GPIO pins, lower power consumption, and a neater form factor than other models.

Figure 1: Raspberry Pi (Top View) 1

A brief overview of the Raspberry Pi Model B+ V1.2 is given below:

·       Dual step-down (buck) power supply for 3.3V and 1.8V; the 5V supply has polarity protection, a 2A fuse and hot-swap protection.
·       New USB/Ethernet controller chip.
·       4 USB ports.
·       40 GPIO pins.
·       3.5mm 'headphone' jack.
·       MicroSD card socket instead of full-size SD.

·       Four mounting holes in a rectangular layout.
·       Many connectors moved around.

2.1.2    Arduino Uno

Arduino is a brand that produces development boards, kits and open-source software for small and commercial projects. It is used widely around the globe. We selected the Arduino for the precise movement of our robot on the basis of signals obtained from the Raspberry Pi and the ultrasonic sensors.

The Arduino will control the motion of the robot with the help of the motor driver. We selected the Arduino because of our low-cost requirement, with reusability and market trust as major factors. There are other project boards available on the market, like STM development kits and TI Cortex M3/M4 boards, having more I/Os than the Arduino Uno board we used, but since the Arduino meets our requirements we selected it.

Figure 2: Arduino Uno Board 2

Looking at the board from the top down, this is an outline of what you will see (parts of the board you might interact with in the course of normal use are highlighted):

Figure 3: Arduino Uno Elaborative Diagram 3

Starting clockwise from the top centre:

·      Analog Reference pin (orange)
·      Digital Ground (light green)
·      Digital Pins 2-13 (green)
·      Digital Pins 0-1 / Serial In/Out – TX/RX (dark green) – these pins cannot be used for digital I/O (digitalRead and digitalWrite) if you are also using serial communication (e.g. Serial.begin).
·      Reset Button – S1 (dark blue)
·      In-circuit Serial Programmer (blue-green)
·      Analog In Pins 0-5 (light blue)
·      Power and Ground Pins (power: orange, grounds: light orange)
·      External Power Supply In (9-12VDC) – X1 (pink)
·      Toggles External Power and USB Power (place jumper on two pins closest to desired supply) – SV1 (purple)
·      USB (used for uploading sketches to the board and for serial communication between the board and the computer; can be used to power the board) (yellow)
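The serial link between the Raspberry Pi and the Arduino can be sketched as a small command protocol. The single-character command set and the newline framing below are assumptions chosen for illustration; on the real robot, the Pi would write these bytes over the USB serial port (e.g. with pyserial) and the Arduino sketch would read them and drive the L298 accordingly.

```python
# Sketch of a possible Raspberry Pi -> Arduino serial command protocol.
# The command letters and newline framing are illustrative assumptions.

COMMANDS = {
    "forward": "F",
    "backward": "B",
    "left": "L",
    "right": "R",
    "stop": "S",
}

def encode_command(action):
    """Encode a drive action as a newline-terminated ASCII frame (Pi side)."""
    if action not in COMMANDS:
        raise ValueError("unknown action: %r" % action)
    return (COMMANDS[action] + "\n").encode("ascii")

def decode_command(frame):
    """Decode a received frame back into an action name (Arduino side)."""
    letter = frame.decode("ascii").strip()
    for action, code in COMMANDS.items():
        if code == letter:
            return action
    raise ValueError("unknown frame: %r" % frame)

if __name__ == "__main__":
    frame = encode_command("left")
    print(frame)                  # b'L\n'
    print(decode_command(frame))  # left
```

Keeping the protocol to single ASCII characters makes the Arduino-side parser trivial and avoids framing errors at low baud rates.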

Operating Voltage              5V
Input Voltage (recommended)    7-12V
Input Voltage (limit)          6-20V
Digital I/O Pins               14 (of which 6 provide PWM output)
PWM Digital I/O Pins           6
Analog Input Pins              6
DC Current per I/O Pin         20 mA
DC Current for 3.3V Pin        50 mA
Flash Memory                   32 KB (ATmega328P), of which 0.5 KB used by bootloader
SRAM                           2 KB (ATmega328P)
EEPROM                         1 KB (ATmega328P)
Clock Speed                    16 MHz
Length                         68.6 mm
Width                          53.4 mm
Weight                         25 g

Table 1: Arduino Board Specification 1

2.1.3    Raspberry Pi Camera

·      Small board size: 25mm x 20mm x 9mm
·      A 5MP (2592×1944 pixels) OmniVision 5647 sensor in a fixed-focus module
·      Supports 1080p30, 720p60 and 640x480p60/90 video recording

Figure 4: Raspberry Pi Camera 4

2.1.4    Ultrasonic Sensors

·      Provide precise, non-contact distance measurements within a 2 cm to 3 m range
·      Ultrasonic measurements work in any lighting condition, making this a good choice to supplement infrared object detectors
·      Simple pulse-in/pulse-out communication requires just one I/O pin
·      Burst indicator LED shows measurement in progress
·      3-pin header makes it easy to connect to a development board, directly or with an extension cable; no soldering required

Figure 5: Ultrasonic Sensor 5

2.1.5    Dual H-Bridge

The term H-Bridge is derived from the typical graphical representation of such a circuit. An H-bridge is built with four switches (solid-state or mechanical). Since the IC package contains two sets of H-bridges, hence the name "Dual H-Bridge". When switches S1 and S4 are closed (and S2 and S3 are open), a positive voltage is applied across the motor.

By opening switches S1 and S4 and closing switches S2 and S3, this voltage is reversed, allowing reverse operation of the motor.

Figure 6: Switch-based H-Bridge for driving a DC motor both ways (FWD/BWD) 6

Using the nomenclature above, switches S1 and S2 should never be closed at the same time, as this would cause a short circuit on the input voltage source. The same applies to switches S3 and S4. This condition is known as shoot-through. A dual H-bridge readily available on the market was utilized, which solved the problem we were experiencing, as this H-bridge is capable of providing +5 and -5 volts with a maximum of 2.5 A as the load current. Moreover, this bridge circuit module is capable of driving two motors simultaneously with a single module based on the L298 IC.

2.1.6    Mechanical Structure

The mechanical structure is a two-layer acrylic fiber chassis.
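Returning briefly to the dual H-bridge of section 2.1.5, its switching rules can be sketched as a small truth-table check. This is a minimal sketch of the logic described above (True = switch closed), not of the L298 itself:

```python
# Sketch of the H-bridge switching logic from section 2.1.5. The function
# reports the motor behaviour for a given switch combination and rejects
# the shoot-through states (S1+S2 or S3+S4 closed), which short the supply.

def h_bridge_state(s1, s2, s3, s4):
    if (s1 and s2) or (s3 and s4):
        raise ValueError("shoot-through: supply shorted")
    if s1 and s4 and not (s2 or s3):
        return "forward"   # positive voltage across the motor
    if s2 and s3 and not (s1 or s4):
        return "reverse"   # voltage reversed across the motor
    return "coast"         # motor not driven

if __name__ == "__main__":
    print(h_bridge_state(True, False, False, True))   # forward
    print(h_bridge_state(False, True, True, False))   # reverse
```

In the L298 module these interlocks are handled by the driver's enable and direction inputs, so the controller never has to manage the four switches individually.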

Figure 7: Mechanical Structure 7

CHAPTER 3: WORKING PRINCIPLE

3.1       Algorithm

·      At first, the robot will start revolving about itself at its starting location in search of a sign. Every time it pauses revolving for a short time, the Pi camera will check whether it sees any sign. If it does not find a sign within a certain amount of rotation, it will translate some finite distance and start searching again.

It will start approaching the sign after detecting it.

·      Once the robot has found a sign, it needs to approach it. It will do this by first centering the sign in the view. Once it centers the sign, it will drive straight towards it. Every now and then, it will check again to make sure the sign is still at the center of the view, and adjust if it isn't. It will also constantly check the distance to the sign. Once it reaches a distance where the sign is close enough, it will proceed to the next stage of color detection.

·      Once the robot is close enough to look at the sign, it will need to figure out the meaning of the sign. It will use image thresholding and contour detection to determine the shape. Once it detects the sign, it will make a move according to the sign.

·      It will then search for the next sign and repeat movements 2 and 3 until it finds the last sign.

·      After recognizing the last sign (i.e. the sign for the ball), the robot will move in search of a spherical ball; when the ball is identified, the robot will go to the ball and play a buzzer at the end, indicating the completion of the route.
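The approach-and-interpret steps above can be sketched in pure Python. The frame width, centering tolerance, stop distance and sign-to-action table below are illustrative assumptions; on the real robot, the sign's position would come from OpenCV contour detection and the distance from the ultrasonic sensor.

```python
# Sketch of steps 2 and 3 of the algorithm: steer to keep the sign centered
# while approaching, then map the recognized sign to an action. All the
# constants here are illustrative assumptions.

FRAME_WIDTH = 320      # assumed capture width in pixels
CENTER_TOLERANCE = 20  # how far off-center the sign may drift (pixels)
STOP_DISTANCE_CM = 30  # close enough to read the sign

SIGN_ACTIONS = {
    "left": "turn_left",
    "right": "turn_right",
    "u_turn": "turn_around",
    "back": "reverse",
    "ball": "search_ball",
}

def steering_command(sign_center_x, distance_cm):
    """Decide the next drive command while approaching a detected sign."""
    if distance_cm <= STOP_DISTANCE_CM:
        return "stop"  # close enough: hand over to sign interpretation
    offset = sign_center_x - FRAME_WIDTH // 2
    if offset < -CENTER_TOLERANCE:
        return "turn_left"   # sign drifted left of center: re-center
    if offset > CENTER_TOLERANCE:
        return "turn_right"  # sign drifted right of center: re-center
    return "forward"         # sign centered: drive straight towards it

def interpret_sign(sign_name):
    """Map a recognized sign to the robot's next action."""
    return SIGN_ACTIONS.get(sign_name, "keep_searching")

if __name__ == "__main__":
    print(steering_command(sign_center_x=40, distance_cm=120))   # turn_left
    print(steering_command(sign_center_x=165, distance_cm=120))  # forward
    print(steering_command(sign_center_x=165, distance_cm=25))   # stop
    print(interpret_sign("u_turn"))                              # turn_around
```

Re-running `steering_command` on every frame gives the periodic re-centering described in step 2, and the dictionary lookup keeps the sign-to-action mapping in one easily extended place.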

3.2       Reference Signs

CHAPTER 4: PYTHON CODE

The code is under development.

CHAPTER 5: REFERENCES

1       https://www.raspberrypi-spy.co.uk/2014/07/raspberry-pi-b-gpio-header-details-and-pinout/
2       https://store.arduino.cc/arduino-uno-rev3
3       https://store.arduino.cc/arduino-uno-rev3
4       https://www.raspberrypi.org/products/camera-module-v2/
5       http://www.dx.com/p/new-hc-sr04-ultrasonic-sensor-distance-measuring-module-3-3v-5v-compatible-for-arduino-416860#.WltUBbynE0Y
6       https://en.wikipedia.org/wiki/H_bridge
7       http://www.goldstem.org

