[001] OpenCV: How to capture video from a PS4 camera
In this blog post I will explain how you can set up a PlayStation 4 camera on a Windows machine and how to capture images from it for your personal projects.
- OpenCV: Capturing video from a PS4 camera
- OpenCV: How to calibrate a PS4 camera
- OpenCV: How to obtain disparity maps
Throughout the rest of this series, you will gain experience in using OpenCV and Python to handle image streams, combine stereo images to obtain depth information, run pose estimation models to extract skeletons from frames, and transform 2D skeletons into 3D skeletons for animations.
Setting up the camera
![The PS4 connector is just a regular USB3 port with a slightly modified shape]()
Obtaining a camera adaptor
Connecting the camera to your PC
Loading camera firmware from code
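Before OpenCV can read anything from the camera, its firmware has to be loaded. We can do this from Python by launching the firmware loader executable through the subprocess module and checking its console output; the snippet below shows just this step, and it is the same one used in the merged code at the end of the post.

import subprocess

# open a process which starts the firmware loader
proc = subprocess.Popen('EyeCameraFirmwareLoader.exe',
                        stdout=subprocess.PIPE)
# read the loader's console output
status = str(proc.stdout.read()).strip()

# check whether the firmware was loaded
was_not_loaded = 'Usb Boot device not found...' in status
if was_not_loaded:
    print('You were unable to load the firmware.')

With the firmware in place, we can open the camera stream.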
First we have to identify the index of our camera. If you have only one camera connected, the index should be 0; otherwise the index is given by the order in which you plugged the cameras into your PC.
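If you are not sure which index belongs to the PS4 camera, a quick sketch like the one below can help; it is just a convenience check that tries the first few indices and prints the ones that open.

import cv2

# probe the first few indices and report which ones respond
for index in range(4):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        print(f'Found a camera at index {index}')
    cap.release()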
# open camera stream using OpenCV
cap = cv2.VideoCapture(camera_index)
Then we need to read the frames as a continuous stream and display the output.
while True:
    # capture frame-by-frame
    ret, frame = cap.read()
    # display the current frame
    cv2.imshow('original', frame)

cv2.destroyAllWindows()
After adding a few conditions to check that frames are read correctly and to exit the while loop, the final code should look something like this:
# open camera stream using OpenCV
cap = cv2.VideoCapture(camera_index)

while True:
    # capture frame-by-frame
    ret, frame = cap.read()
    # if the frame is read correctly, ret is True
    if not ret:
        print("Can't receive frame. Exiting ...")
        break
    # show frames until the 'q' key is pressed
    cv2.imshow('original', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cv2.destroyAllWindows()
cap.release()
![Now we just have to fix image quality and infinite frames]()
Video stream not working
For some reason, you have to run the Windows Camera app the first time you plug the camera into your PC. You can either solve the issue by launching the app manually, or by calling the following function in your code.
import subprocess
import time

def _adapt_brightness_using_windows():
    # use the Windows Camera app's predefined camera init functionality
    # Warning! There will be exposure differences between times of day
    subprocess.run('start microsoft.windows.camera:', shell=True)
    time.sleep(4)  # wait for camera brightness to calibrate
    subprocess.run('Taskkill /IM WindowsCamera.exe /F', shell=True)
    time.sleep(1)
Fixing camera quality
I’ve played a bit with some values and found that the ones which best fit the camera’s resolution are 3448 pixels for the width and 808 pixels for the height. Add the following lines after you open the camera stream, and the image quality should improve drastically.
FRAME_INFO = {
    cv2.CAP_PROP_FRAME_WIDTH: 3448,
    cv2.CAP_PROP_FRAME_HEIGHT: 808
}

# open camera stream using OpenCV
cap = cv2.VideoCapture(camera_index)

# set actual frame width and height
for key, value in FRAME_INFO.items():
    cap.set(key, value)
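If you want to double-check that the driver actually accepted these values, you can read them back from the capture object:

# read back the values to confirm the driver accepted them
actual_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
actual_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print(f'Capturing at {actual_width}x{actual_height}')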
![We still have to fix the infinite frame loop]()
Fixing camera infinite frame loop
It seems we only need the first two images from the frame to make do, and we should also get rid of that ugly green bar on the left. Again, I played a bit with the numbers, and it turns out that after shifting 64 pixels to the right we can extract two frames of 1264 by 800 pixels using the code below.
def _extract_stereo(frame, x_shift=64, y_shift=0,
                    width=1264, height=800, frame_shape=None):
    # crop the two camera views out of the combined frame
    frame_r = frame[y_shift : y_shift + height,
                    x_shift : x_shift + width]
    frame_l = frame[y_shift : y_shift + height,
                    x_shift + width : x_shift + width * 2]
    # optionally resize both views to a common shape
    if frame_shape:
        frame_r = cv2.resize(frame_r, frame_shape)
        frame_l = cv2.resize(frame_l, frame_shape)
    return frame_r, frame_l
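Inside the capture loop, the helper is used right after reading a frame; the two views can then be shown side by side, which is exactly what the merged code below does.

import numpy as np

# inside the capture loop, right after ret, frame = cap.read():
frame_r, frame_l = _extract_stereo(frame)
cv2.imshow('split_frames', np.concatenate([frame_r, frame_l], axis=1))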
![Not too shabby if I do say so myself]()
Merged code
import cv2
import time
import numpy as np
import subprocess

FRAME_INFO = {
    cv2.CAP_PROP_FRAME_WIDTH: 3448,
    cv2.CAP_PROP_FRAME_HEIGHT: 808
}
camera_index = 0

def _extract_stereo(frame, x_shift=64, y_shift=0, width=1264, height=800, frame_shape=None):
    # crop the two camera views out of the combined frame
    frame_r = frame[y_shift : y_shift + height,
                    x_shift : x_shift + width]
    frame_l = frame[y_shift : y_shift + height,
                    x_shift + width : x_shift + width * 2]
    # optionally resize both views to a common shape
    if frame_shape:
        frame_r = cv2.resize(frame_r, frame_shape)
        frame_l = cv2.resize(frame_l, frame_shape)
    return frame_r, frame_l

def _adapt_brightness_using_windows():
    # use the Windows Camera app's predefined camera init functionality
    # Warning! There will be exposure differences between times of day
    subprocess.run('start microsoft.windows.camera:', shell=True)
    time.sleep(4)  # wait for camera brightness to calibrate
    subprocess.run('Taskkill /IM WindowsCamera.exe /F', shell=True)
    time.sleep(1)

# open a process which starts the firmware loader
proc = subprocess.Popen('EyeCameraFirmwareLoader.exe',
                        stdout=subprocess.PIPE)
# read the loader's console output
status = str(proc.stdout.read()).strip()
# check whether the firmware was loaded
was_not_loaded = 'Usb Boot device not found...' in status
if was_not_loaded:
    print('You were unable to load the firmware.')

_adapt_brightness_using_windows()

# open camera stream using OpenCV
cap = cv2.VideoCapture(camera_index)

# set actual frame width and height
for key, value in FRAME_INFO.items():
    cap.set(key, value)

while True:
    # capture frame-by-frame
    ret, frame = cap.read()
    # if the frame is read correctly, ret is True
    if not ret:
        print("Can't receive frame. Exiting ...")
        break
    # split the combined frame and show both views until the 'q' key is pressed
    frame_r, frame_l = _extract_stereo(frame)
    # cv2.imshow('original', frame)
    cv2.imshow('split_frames', np.concatenate([frame_r, frame_l], axis=1))
    if cv2.waitKey(1) == ord('q'):
        break

cv2.destroyAllWindows()
cap.release()
Summary
In today’s post we looked at how we can:
- Connect our PS4 camera to a Windows PC
- Load camera firmware using Python’s subprocess module
- Increase camera quality by setting VideoCapture parameters
- Separate the stereo frame into two frames corresponding to the left and right cameras
This will help us a lot when developing the algorithm for depth calculation, and later on when we develop the algorithm for extracting 3D skeletons.
If you have any questions leave them in the comments and I’ll try to answer them as soon as I can.
Hope you enjoyed the blog post :)
Peace 🐐