Java: Vision

From Deep Blue Robotics Wiki
Revision as of 22:46, 17 November 2016 by Ben shimota


Important Notes on the Camera

- No dashboard will be able to view the camera image unless anonymous viewing is enabled on the camera.
- The camera's web-server username and password are both "FRC".
- Video is transferred via MJPEG: a series of JPEG images pushed by the "server" (i.e., the camera) at the default frame rate until the connection is broken.
- WPICamera, used by the SmartDashboard camera extension, uses JavaCV's FFmpegFrameGrabber. See the camera's user's manual for details of its HTTP protocol.

Camera Bandwidth Limitations

According to the FMS White Paper, there is a 7 megabit-per-second limit on bandwidth. This means that if we are going to use two cameras and switch between their feeds, we will have to limit the camera bandwidth. With the camera plugged in, browse to its address in Firefox or another browser; it will ask for a username and a password, and then bring up a live feed from the camera. Go to the "Setup" page, open the "Video" pulldown on the left, and go to "Video settings." On that page there is a check-box for "Include text"; check the box and type "#b" in the text field. Still on that page, there is a pulldown for resolution. Change the resolution to 320x240 pixels, and then click "Save" at the bottom of the screen. Back on the live video feed page, the image will be small, as expected from turning down the resolution, and there will also be a number at the top left of the video image. This is the bandwidth, in kbit/s, being used by the camera. It will be fairly large, but still well under the 7 Mbit/s limit.
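To get a feel for how resolution and frame rate relate to the 7 Mbit/s cap, a rough estimate can be sketched in Java. The bytes-per-pixel compression factor here is an assumption (real MJPEG output from the Axis camera varies with scene content and the compression setting), so treat the result as a ballpark figure, not what the "#b" overlay will show.

```java
// Rough MJPEG bandwidth estimate. The compression factor (bytes per pixel
// after JPEG compression) is an assumption; measure with the "#b" overlay
// for real numbers.
public class BandwidthEstimate {
    /** Estimated bandwidth in megabits per second. */
    static double estimateMbps(int width, int height, int fps, double bytesPerPixel) {
        double bitsPerFrame = width * height * bytesPerPixel * 8;
        return bitsPerFrame * fps / 1_000_000.0;
    }

    public static void main(String[] args) {
        // 320x240 at 30 fps, assuming JPEG compresses to ~0.15 bytes per pixel
        double mbps = estimateMbps(320, 240, 30, 0.15);
        System.out.printf("~%.2f Mbit/s (limit is 7)%n", mbps);
    }
}
```

Even a pessimistic compression factor at 320x240 lands well under the cap, which matches the observation above.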

When changing the frame rate and compression of the camera, keep in mind that the SmartDashboard will assume a camera running at less than 26 fps is not connected; it will either take a very long time to display the camera image or not show it at all.

SFX Camera Widget

When using the camera in SFX, set the URL for the camera widget to the camera's MJPEG stream address, based on the camera's IP address. Otherwise, the camera behaves the same as in the regular SmartDashboard.

Multiple Cameras (Two)

Having more than one camera can be extremely useful, but it has some requirements to work. The first is simply that the camera settings must be tweaked (lower resolution, fps, and compression) to reduce the combined bandwidth of both cameras. The second is that the SmartDashboard has a very odd bug when it displays two camera images: if either compression rate is greater than 40, or the compression rates are equal and greater than 30, the SmartDashboard will often crash. One workaround is to switch between the cameras and display only one at a time.

Using the custom widgets located in Team100Extensions, we can switch between two cameras. Team100Camera is the camera display; CameraToggle is a button that switches which camera Team100Camera takes its image from. The widgets use a custom version of WPIJavaCV.
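The toggle logic can be sketched as a small class. This is an illustration only, not the actual Team100Extensions code: the real Team100Camera and CameraToggle widgets work through the SmartDashboard widget API (not shown), and the URL strings here are placeholders.

```java
// Sketch of the camera-toggle idea: two stream URLs, one active at a time.
// The real widgets (Team100Camera, CameraToggle) share state like this.
public class CameraSelector {
    private final String[] urls;
    private int active = 0;

    public CameraSelector(String urlA, String urlB) {
        this.urls = new String[] { urlA, urlB };
    }

    /** Called by the toggle button: switch which camera feeds the display. */
    public void toggle() {
        active = 1 - active;
    }

    /** The display widget reconnects to this URL after a toggle. */
    public String activeUrl() {
        return urls[active];
    }
}
```

The display widget polls `activeUrl()` and, when it changes, drops its current MJPEG connection and opens a new one, which is where the reconnect delay described below comes from.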

The camera that is not currently displaying its image on the widget stores its images in an internal buffer. When the widget then accesses the images from that camera, the entire buffer is emptied, causing the widget to lose its connection with the camera. The Team100Camera widget will fix the connection issue automatically (this usually takes about 3-4 seconds).

When switching between the two cameras, there is a short period when the old camera is still sending data and the new camera is starting to. There is a bandwidth spike at that point, so it is recommended that the two cameras' combined bandwidth not exceed the bandwidth cap, or at minimum stay close to it.

Vision Processing

For automatic target aiming, it will be necessary to process the image from the camera. This can be done in a variety of ways: 1) on the cRIO, 2) on the Driver Station computer, or 3) on a separate dedicated computer located on the robot. At this point, we are only considering the first two options. For processing images on the cRIO, we would use the vision routines provided by National Instruments. For processing images on the Driver Station computer, two possible options are OpenCV or a program provided to FRC teams called RoboRealm. Either of these would have to work in conjunction with the SmartDashboard and NetTables to communicate results back to the cRIO. In either case, be aware that there is significant lag between the time the camera takes an image and the time the analysis results are available (up to a second, depending on where the image is processed), so care must be taken in using the vision results. One possible approach is to compute the magnitude of the robot's angular misalignment from the image, and then correct the alignment using the signal from the gyro.

Vision Sample Program
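The misalignment computation above can be sketched in plain Java: convert the target's horizontal pixel position into an angle using the camera's horizontal field of view. The 47-degree FOV here is an assumption (check your camera's specifications), and the gyro-based turn itself is only described in comments, since the drive code is robot-specific.

```java
// Sketch: convert a target's horizontal pixel position into an angular offset.
// The 47-degree horizontal FOV is an assumption; the image width matches the
// 320x240 resolution suggested earlier.
public class TargetAngle {
    static final double H_FOV_DEGREES = 47.0; // assumed camera horizontal FOV
    static final int IMAGE_WIDTH = 320;

    /** Degrees the robot must turn to center the target (negative = turn left). */
    static double offsetDegrees(double targetCenterX) {
        double pixelsFromCenter = targetCenterX - IMAGE_WIDTH / 2.0;
        return pixelsFromCenter * (H_FOV_DEGREES / IMAGE_WIDTH);
    }

    public static void main(String[] args) {
        // The drive code would add this offset to the current gyro heading and
        // turn until the gyro reaches that setpoint (not shown). Because the
        // image is up to a second old, the gyro reading must be the one from
        // when the image was captured, not the current one.
        System.out.println(offsetDegrees(240.0));
    }
}
```

Computing the setpoint once from a (stale) image and then closing the loop on the gyro is exactly what makes this approach tolerant of the vision lag described above.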

Sample cRIO vision code is provided in NetBeans for the 2013 FRC game. To load this sample, in NetBeans choose File > New Project, then choose 2013VisionSampleProject from the Samples/FRC Java category. In addition to creating the Java project, the sample also creates a folder called VisionImages (under the new NetBeans project directory) with sample images of the Ultimate Ascent targets.

Vision Processing on the cRIO

The best way to test image processing strategies for the cRIO is to first use the National Instruments Vision Assistant (Start > All Programs > National Instruments > Vision Assistant 2012). This is a program that runs on the PC and was installed as part of the FRC Utilities LabVIEW update. All of our programming ThinkPads at WHS have this program installed. You can use the sample images from the VisionImages folder of the 2013VisionSampleProject program. Also in the VisionImages folder is a file called 2013VisionScript.vascr, an NI Vision Assistant script that determines target locations. Each of the functions in the Vision Assistant has a corresponding call in the cRIO NI vision library.

Vision Processing on the Driver Station Computer Using RoboRealm

The FRC donation from RoboRealm allows us to install the software on up to 5 team machines. Currently, the only machine with this software installed is #16. In addition, anyone who is interested can install a 30-day evaluation on their own computer. Generic tutorials are available on the RoboRealm site, along with specific information for the 2013 Ultimate Ascent game. The FRC tutorials give a number of suggestions on how to compute the target x-y locations and distance, as well as how to use NetTables to get the resulting computed values back to the cRIO. They also provide a sample script with functionality similar to the one provided for the NI Vision Assistant.

Ms. Rhodes has experimented with RoboRealm and was able to get it to work. However, there MIGHT be some conflicts between RoboRealm and the SmartDashboard (the SmartDashboard would not display some variables and values from RoboRealm even though they showed up on TableViewer).

Vision Processing For 2016 Stronghold

For the 2016 build season, FIRST released a new and "easier" program named GRIP for vision processing. The first issue was that this software only works with cameras served over the network, not the USB webcam we had. Another issue is that, when run on the roboRIO, the software took too much memory (about 50% of the roboRIO's memory was consumed by GRIP).

My solution was to use the provided NIVision library instead, as it processed the images the way we needed. In this case we used only an RGB filter that isolated the bright pixels reflected from the retro-reflective tape by the bright blue lights. The code is available on the team's GitHub.
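The idea behind that RGB filter can be sketched in plain Java, independent of the NIVision API. This is an illustration, not the team's actual code: the threshold values are assumptions that would need tuning for real lighting, and pixels are assumed to be packed as 0xRRGGBB integers.

```java
// Sketch of an RGB threshold like the one described above. Pixels are packed
// 0xRRGGBB; the threshold values are assumptions and must be tuned on the robot.
public class RgbFilter {
    /** True if the pixel looks like tape lit by a bright blue ring light. */
    static boolean isTargetPixel(int rgb, int minG, int minB, int maxR) {
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        // Keep bright blue/green pixels, reject pixels with too much red.
        return g >= minG && b >= minB && r <= maxR;
    }

    /** Produce a binary mask: 1 where the pixel passes the filter, else 0. */
    static int[] threshold(int[] pixels, int minG, int minB, int maxR) {
        int[] mask = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            mask[i] = isTargetPixel(pixels[i], minG, minB, maxR) ? 1 : 0;
        }
        return mask;
    }

    public static void main(String[] args) {
        int[] pixels = { 0x20F0F0, 0xF02020, 0x101010 };
        // Bright cyan passes; red and dark pixels are rejected.
        int[] mask = threshold(pixels, 200, 200, 100);
        System.out.println(java.util.Arrays.toString(mask));
    }
}
```

In NIVision the equivalent step is a color threshold producing a binary image, which is then fed to particle analysis to locate the target; this sketch covers only the thresholding stage.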