
Utilize the Stereolabs API in full #6

Open
jruths opened this issue Feb 5, 2025 · 8 comments

jruths commented Feb 5, 2025

The Stereolabs API has lots of useful built-in functionality. This task is about itemizing the options it provides, deciding whether we want to use them, and pulling them into our stack.

@jruths jruths converted this from a draft issue Feb 5, 2025

jruths commented Feb 5, 2025

To get started, talk with @MtGuerenS

@MtGuerenS

Hey Justin, what functionality are you looking to pull into our stack?

@taetkyle taetkyle moved this from ✏️ Todo to 🛠️ In-Progress in 🚗 Hail Bopp Feb 18, 2025

jruths commented Feb 18, 2025

@MtGuerenS @taetkyle Part of this task is really to summarize what the Stereolabs API can do - which steps we can offload to it. One concrete thing is that I believe the API has some mechanism to help:

  1. Calibrate the 4 cameras relative to each other, and
  2. Stitch the 4 cameras into a single image

Those would be a great place to start.
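
For reference, the SDK exposes each camera's factory calibration (intrinsics plus the left/right stereo transform); as the later updates note, this is per-camera calibration, while cross-camera extrinsics are what the ZED360 tooling discussed below is for. A minimal sketch, assuming the pyzed SDK with a recent (4.x) attribute layout, which may differ in older versions:

import pyzed.sl as sl

# Print a single ZED camera's factory calibration (sketch, not our stack code).
cam = sl.Camera()
if cam.open(sl.InitParameters()) == sl.ERROR_CODE.SUCCESS:
    info = cam.get_camera_information()
    calib = info.camera_configuration.calibration_parameters
    left = calib.left_cam
    print(f"Left intrinsics: fx={left.fx}, fy={left.fy}, cx={left.cx}, cy={left.cy}")
    cam.close()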

@taetkyle

02/17/2025 Update:

  • Status: Researching what functions the ZED API provides; running the ZED built-in function for stitching four cameras.

  • Next Steps: Fix the two errors I got while running the ZED 360 function so that it runs without problems, then run the ZED cameras from a Python script using the API.

  • Projected Completion: Feb 24th

  • Update: I learned that the built-in calibration is for calibrating a single camera. ZED 360 provides 360° human detection using four ZED cameras; however, only one of the cameras was working properly for ZED 360, so I ran a camera status check (see the sketch below) and found two errors: one reports that a camera is running even though it is not, and the other points to an error in the AI module.
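
For reference, here is a minimal sketch of the kind of camera status check described above, assuming the Stereolabs Python SDK (pyzed) is installed on the Jetson; it only uses the device list and a trial open(), and is not the exact script used here:

import pyzed.sl as sl

# Enumerate connected ZED cameras and report their state (sketch).
def check_cameras():
    devices = sl.Camera.get_device_list()
    if not devices:
        print("No ZED cameras detected")
        return
    for dev in devices:
        print(f"Serial {dev.serial_number}: model={dev.camera_model}, state={dev.camera_state}")
        # Try to open each camera to surface errors the device list hides.
        init = sl.InitParameters()
        init.set_from_serial_number(dev.serial_number)
        cam = sl.Camera()
        status = cam.open(init)
        print(f"  open() -> {status}")
        if status == sl.ERROR_CODE.SUCCESS:
            cam.close()

if __name__ == "__main__":
    check_cameras()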

@taetkyle taetkyle removed the status in 🚗 Hail Bopp Feb 24, 2025
@taetkyle taetkyle moved this to 💭 Help-Needed in 🚗 Hail Bopp Feb 24, 2025
@taetkyle

02/23/2025 Update:

  • Status: Found that the cable connecting the Jetson Orin to the monitor is broken.

  • Next Steps: Ask the hardware people whether the Jetson Orin can be repositioned so that it is easier to plug the cable in and out. Get a new cable. Fix the two errors I got while running the ZED 360 function so that it runs without problems, then run the ZED cameras from a Python script using the API.

  • Projected Completion: Mar 2nd

  • Update: I was not able to work until Sunday because of a career fair, an interview, and an exam. To finish the week's task I went to the lab on the first floor, but the cable was broken, so I could not work with the ZED camera. I therefore tried to use a quad computer under my own account, but I hit the "park is not in the sudoers file. This incident will be reported." error while installing VS Code.

@taetkyle taetkyle moved this from 💭 Help-Needed to 🛠️ In-Progress in 🚗 Hail Bopp Mar 4, 2025

taetkyle commented Mar 4, 2025

03/03/2025 Update:

  • Status: Made a Python file that displays video from the four ZED cameras simultaneously (a minimal sketch follows below).

  • Next Steps: First, check whether Stereolabs provides a Python script or a ROS node for a bird's-eye view. Second, if there is no such function, figure out how to stitch the four cameras into a single straight-line panorama. Third, see whether the data I got from ZED360 can be used to stitch more efficiently. Fourth, turn the straight stitched view into a 360° view.

  • Projected Completion: Mar 9th

  • Update: I verified that the adapter is working properly and located where ROS 2 is installed on the Jetson. I confirmed that all the cameras can be opened at the same time by running the Python file.
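
A minimal sketch of what such a multi-camera viewer script might look like, assuming the pyzed SDK and OpenCV are installed; the serial numbers are the ones quoted later in this thread, and simple side-by-side display stands in for whatever layout the actual script uses:

import cv2
import pyzed.sl as sl

# Placeholder serial numbers, taken from later in this thread.
SERIALS = [49004271, 43765493, 46108623, 47860268]

def main():
    cams, mats = [], []
    for sn in SERIALS:
        init = sl.InitParameters()
        init.set_from_serial_number(sn)
        cam = sl.Camera()
        if cam.open(init) != sl.ERROR_CODE.SUCCESS:
            print(f"Could not open camera {sn}")
            continue
        cams.append(cam)
        mats.append(sl.Mat())
    try:
        while True:
            frames = []
            for cam, mat in zip(cams, mats):
                if cam.grab() == sl.ERROR_CODE.SUCCESS:
                    cam.retrieve_image(mat, sl.VIEW.LEFT)
                    # Convert BGRA from the SDK to BGR for OpenCV display.
                    frames.append(cv2.cvtColor(mat.get_data(), cv2.COLOR_BGRA2BGR))
            if frames:
                cv2.imshow("ZED cameras", cv2.hconcat(frames))
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    finally:
        for cam in cams:
            cam.close()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    main()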


taetkyle commented Mar 10, 2025

03/10/2025 Update:

  • Status: Ran the ROS 2 launch files that Stereolabs provides, but hit an error when running the following command: "ros2 launch zed_wrapper zed_camera.launch.py camera_model:=zed2 resolution:=HD1080"

  • Next Steps: Try to resolve the error (see the note below); if it cannot be resolved, reinstall or find another workaround, and then run the multi-camera launch from ROS 2.

  • Projected Completion: Mar 16th

  • Update: I found the ROS 2 launch files that should let me run the multi-camera setup, but I got an error while running the following command in the Ubuntu terminal: "ros2 launch zed_wrapper zed_camera.launch.py camera_model:=zed2 resolution:=HD1080". I tried to address the resolution error by changing the arguments at the end of the command, but it returned the same message, so I need to analyze the whole error message.

[Three screenshots of the error output were attached.]
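
One debugging step worth trying (my suggestion, not something done in the thread): ros2 launch can print the arguments a launch file actually accepts, which helps confirm whether camera_model and the resolution are being passed the way the wrapper expects. In some versions of the wrapper the grab resolution is set in the YAML config rather than as a launch argument.

Command to list the accepted launch arguments: ros2 launch zed_wrapper zed_camera.launch.py --show-args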

@taetkyle

03/24/2025 Update:

  • Status: Finally succeeded in running the zed_multi_camera launch.

  • Next Steps: Now that zed_multi_camera is streaming video from the ZED cameras, I will stitch those images into a single view using the topics it publishes. OpenCV will be used for the stitching (a sketch follows after the commands below).

  • Projected Completion: Mar 30th

  • Update: To run the zed_multi_camera launch, I first figured out how to gather all the required information about the cameras. I then used the zed-ros2-examples package in Navigator_Orin, not zed-ros2-wrapper. Running the launch with just the command at the bottom did not give useful results because of limited computation power, so I turned off all the other features, such as depth, leaving only video, by generating and editing a YAML file. I also switched the Jetson's power mode to MAX to free up more compute. After that, the launch ran successfully, and I could view the output with the command at the bottom.

Command to launch the four cameras:

ros2 launch zed_multi_camera zed_multi_camera.launch.py \
  config_file:=/home/nova/Desktop/Navigator_Orin/src/zed-ros2-examples/tutorials/zed_multi_camera/config/multi_cameras.yaml \
  cam_names:="[zed_front, zed_left, zed_right, zed_rear]" \
  cam_models:="[zedx, zedx, zedx, zedx]" \
  cam_serials:="[49004271, 43765493, 46108623, 47860268]"

Command for result: ros2 run rqt_image_view rqt_image_view
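
A minimal sketch of the OpenCV stitching step mentioned in Next Steps, written as a small rclpy node. The topic names are assumptions based on the camera names above (the actual topics published by zed_multi_camera should be checked with ros2 topic list), and simple horizontal concatenation stands in for real panoramic stitching:

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from message_filters import ApproximateTimeSynchronizer, Subscriber
from cv_bridge import CvBridge
import cv2

# Assumed topic names; verify with: ros2 topic list
TOPICS = [
    "/zed_multi/zed_front/left/image_rect_color",
    "/zed_multi/zed_left/left/image_rect_color",
    "/zed_multi/zed_right/left/image_rect_color",
    "/zed_multi/zed_rear/left/image_rect_color",
]

class StitchNode(Node):
    def __init__(self):
        super().__init__("zed_stitch")
        self.bridge = CvBridge()
        self.pub = self.create_publisher(Image, "stitched_image", 1)
        subs = [Subscriber(self, Image, t) for t in TOPICS]
        # Loosely synchronize the four streams by timestamp.
        self.sync = ApproximateTimeSynchronizer(subs, queue_size=5, slop=0.05)
        self.sync.registerCallback(self.on_images)

    def on_images(self, *msgs):
        frames = [self.bridge.imgmsg_to_cv2(m, desired_encoding="bgr8") for m in msgs]
        stitched = cv2.hconcat(frames)  # placeholder for real panoramic stitching
        self.pub.publish(self.bridge.cv2_to_imgmsg(stitched, encoding="bgr8"))

def main():
    rclpy.init()
    rclpy.spin(StitchNode())
    rclpy.shutdown()

if __name__ == "__main__":
    main()

Once running, the stitched_image topic can be viewed with the same rqt_image_view command shown above.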

Labels: none · Project status: 🛠️ In-Progress · 3 participants