What do you want to do with your drone?


Thought I’d try and kick-start a thread to see what y’all are doing out there, or want to do.
What kind of hardware are you playing with? New sensors for obstacle avoidance?
What kind of builds outside of the course? Hexacopter? Heavy payloads? Long duration flights?
What are the issues in reaching your goal?
How can we (the community) help you get there?


I am not a very consistent student. I have gone through the programming course a couple of times. My ultimate goal was to make an agri-drone for my own nano-vineyard to do occasional light spraying of neem oil, and other viticultural tasks like foliar fertilization. I own the Navio2 because I started this a long time ago (about 4 years) and, due to the expense, wish to stay with it. I have set up stand-alone Ubuntu machines and the course-instructed VirtualBox install on a Win64 workstation, both from scratch, and both worked. I have two RPi’s and the GPS and power distribution cables from Emlid. I have yet to purchase the drone frame. There are now several bare-bones agri-drone kits that look like what I will have eventually, in quad, hex, and octo configurations.

I have put myself partway into two other books: the first was on AI for the Pi, and the other was on autonomous drone AI with an RPi on board. The first was great, with moderate success, but the latter was returned; the support files were supposed to be downloadable and never materialized. Meanwhile I have one vintage RC gas airplane and another I am building. I recently acquired a Mac Powerbook (Intel) and have Windows (with VirtualBox Ubuntu) and macOS on board. I did this due to the lack of macOS ground-control software and wanting something portable.

My problems with moving faster are the vineyard, my traveling schedule, and a fourth grandkid a mile away. It is hard for me to be consistent. But when I get into it, I do enjoy it.


Right now I am playing with the bigger Pixhawk kit drone. I have built it, and my goal at the moment is more on the software side: I am planning to create a web application where I can see the drone’s telemetry, video stream, and live map. This would allow people to manage the drone from anywhere without using Mission Planner, for example, and could add security settings, a login, machine learning, saving data to the cloud, and other cool stuff.
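For the telemetry side of a web app like that, the raw MAVLink GLOBAL_POSITION_INT message uses scaled integer fields, so a backend has to convert them before serving JSON to the browser. A minimal sketch, assuming you already have the message fields as a dict (the function name and payload keys here are my own invention; the scale factors come from the MAVLink message definition):

```python
import json

def telemetry_to_json(msg: dict) -> str:
    """Convert raw GLOBAL_POSITION_INT fields into a JSON payload a
    web dashboard could consume (e.g. pushed over a WebSocket).
    MAVLink scaling: lat/lon are degrees * 1e7, altitudes are in
    millimetres, velocities in cm/s, heading in centidegrees."""
    payload = {
        "lat": msg["lat"] / 1e7,                      # degrees
        "lon": msg["lon"] / 1e7,                      # degrees
        "alt_m": msg["alt"] / 1000.0,                 # metres AMSL
        "rel_alt_m": msg["relative_alt"] / 1000.0,    # metres above home
        "groundspeed_ms": ((msg["vx"] ** 2 + msg["vy"] ** 2) ** 0.5) / 100.0,
        "heading_deg": msg["hdg"] / 100.0,
    }
    return json.dumps(payload)

# Example with made-up raw values:
raw = {"lat": 473977420, "lon": 85455940, "alt": 488000,
       "relative_alt": 10500, "vx": 300, "vy": 400, "vz": 0, "hdg": 9000}
print(telemetry_to_json(raw))
```

In a real setup the dict would come from something like pymavlink’s message stream; doing the unit conversion server-side keeps the browser code simple.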


These are two really cool projects which could gain traction with other prospective drone builders/users.
I hope you can share your progress with time. And of course maybe we can try and help along the way where we can.

As far as myself, I’d also like to learn more how ML/AI could influence drone applications.
My other projects include:
o An RPi Zero W drone with video [https://dojofordrones.com/pi-zero-drone/].
o Drones with on-board environmental sensors that can map and send real-time data to the home base.
o An enclosed, stand-alone, solar-powered sensor array that stores environmental data which can be uploaded to a drone. Designed for remote or hard-to-access sites.
o Drone development using the latest ROS2, Gazebo Garden on current Ubuntu platforms [https://community.dojofordrones.com/t/full-gz-sim-on-22-04-lts/805].
o Continued development of various SITL setups to test programs before they are used in the field. Working in the SITL before going into the field is essential for working out bugs, saving time, gaining insight into how a program works, and avoiding drone failures.


That sounds really cool!

I am currently trying to achieve the PL (precision landing) mission with my drone in the field. I am working out bugs in the software because I did not copy the code word for word, so I ran into issues. I will go back and probably copy the code word for word just to get deterministic behavior, then add my own spice after.

I also ran into hardware issues, so I am testing different drone arms (official DJI F450 arms), props, and GPS. I have not been able to run autotune because I am getting yaw bias, which I believe might be coming from the props or imperfections in the drone arms. If that doesn’t fix the yaw bias, I will try new ESCs and motors. Plan C is to scrap the frame and get a new one.

In any case, once I get really fine-tuned flight and can perform precision landings smoothly, I will begin incorporating AI. My goal is to have very fine-tuned search algorithms; I will probably start with TSP or Dijkstra’s and work my way up to SLAM. I also plan on getting a Google Coral to give the drone more computer-vision capabilities.
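Since Dijkstra’s came up: it is a nice first step because it is short and easy to verify in SITL before fielding anything. A self-contained sketch over a hand-made waypoint graph (the node names and edge costs are made up purely for illustration):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path via Dijkstra's algorithm.
    graph: {node: [(neighbor, cost), ...]} with non-negative costs.
    Returns (path, total_cost)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]                    # priority queue of (cost, node)
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                       # stale queue entry, skip it
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk back from goal to start to reconstruct the path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy waypoint graph (costs could be distances in metres):
waypoints = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 2.0), ("D", 5.0)],
    "C": [("D", 1.0)],
    "D": [],
}
path, cost = dijkstra(waypoints, "A", "D")
print(path, cost)  # ['A', 'B', 'C', 'D'] 4.0
```

The same function works unchanged if the nodes are GPS waypoints and the costs are computed distances, which makes it a decent building block before moving to anything SLAM-shaped.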

I will definitely keep posting follow-ups and cries for help :rofl::sweat_smile::rofl:

PS: (If anyone wants to collab… I’m down!)


Once you are ready on the AI I will do it. Just lead the way! :smiley:


Sounds good! I was finally able to successfully complete my PL, and it went well! I think I need to iron out a few kinks though. For example, whenever I execute the takeoff_and_land script and the drone reaches its target altitude, it bobs up and down (evident in the video), so I will be trying to debug that now. I don’t think it should be too hard to track down… have you experienced this type of behavior?

Other than that, I will start planning AI missions and comparing and contrasting edge TPUs so that we can start running more sophisticated OpenCV scripts. So far, I’ve been looking at the Google Coral and the Arduino Nicla Vision as options. With those, we can detect and classify objects, so I think it would be really cool to integrate one. I would like to perform a precision landing by having the drone decide where to land, or something similar.

Currently, my drone relies on me to give it velocity commands to send it into the ArUco marker’s vicinity. However, what if, when it takes off, it scanned its forward-facing environment and chose where to land, via classification or object recognition? “Where to land” could be a table, near a certain color flag, or whatever we train it to be interested in. Once it gets there, it would still use the ArUco marker to land. Does this sound like something you would be interested in exploring?
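On the velocity-command side, a lot of precision-landing scripts boil down to a proportional controller on the marker’s pixel offset from the image centre. A hedged sketch of that step only, with made-up gains and a simple body-frame convention (vx forward, vy right, downward-facing camera with its top edge toward the nose; not tied to any particular library):

```python
def velocity_from_marker(marker_px, image_size, gain=0.5, max_speed=1.0):
    """Map a detected marker's pixel position to a body-frame velocity
    command (vx forward, vy right) proportional to its normalized
    offset from the image centre. Gain and speed limit are placeholders
    you would tune in SITL first."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    # Normalized offsets in [-1, 1]. Image y grows downward, so a
    # marker above centre means "move forward" with this camera mount.
    err_right = (marker_px[0] - cx) / cx
    err_forward = (cy - marker_px[1]) / cy
    clamp = lambda v: max(-max_speed, min(max_speed, v))
    return clamp(gain * err_forward), clamp(gain * err_right)

# Marker detected up-and-left of centre in a 640x480 frame:
vx, vy = velocity_from_marker((160, 120), (640, 480))
print(vx, vy)  # 0.25 -0.25  (move forward and left)
```

The marker pixel would come from whatever detector you are already running (e.g. OpenCV’s ArUco module); swapping the target from “marker centre” to “centre of a classified object” is exactly the kind of extension you are describing.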


Your first overshoot looked like it occurred because you blew by the target too quickly. Perhaps go slower or try an estimated geo-coordinate?

That is quite an idea and could be quite useful. You would need a test protocol, perhaps evaluating the receiver operating characteristic (ROC) curve for AI optimization; this would involve tests of specificity and sensitivity. Also, for safety, you would need flyaway protection and override procedures (e.g., failsafe, fencing, motor braking), and when in AUTO mode, what about obstacle avoidance? Maybe include a horizontal lidar?
I would be willing to try it but don’t know much about this subject.
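For anyone new to the ROC idea: each detector threshold gives one sensitivity/specificity point, computed from plain confusion-matrix counts on labelled test frames. A tiny sketch with made-up counts (nothing here comes from a real flight test):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (true positive rate) and specificity (true negative
    rate) from confusion-matrix counts; one such pair per detector
    threshold gives the points of a ROC curve."""
    sensitivity = tp / (tp + fn)   # of all real targets, fraction found
    specificity = tn / (tn + fp)   # of all non-targets, fraction rejected
    return sensitivity, specificity

# e.g. a hypothetical landing-site classifier scored on labelled frames:
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=90, fp=10)
print(sens, spec)  # 0.9 0.9
```

Sweeping the classifier’s confidence threshold and plotting sensitivity against (1 − specificity) would give the ROC curve for picking an operating point.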

– Jack