If I restrict the Raspberry Pi 4b to a single hdmi channel, will that affect/improve animation performance?
If your current setup uses an extended display, limiting the RPi’s display configuration to a single display should definitely improve performance.
Setting the RPi to mirror the same output on both HDMI ports should achieve a similar result, because only a single resolution needs to be processed, requiring half the bandwidth of an extended setup, and the rendered image is re-used for the second display.
Nevertheless, for best results across all computers you should use a video splitter that takes one HDMI signal and outputs two or more identical signals to its HDMI outputs. This approach may add a small, consistent lag to the response time, but it lifts work away from the source computer, reducing jitter and eliminating the potential v-sync issues present when using an extended display.
I made a breakthrough earlier today while attempting to improve the performance of screen projections.
This video is from an uncommitted prototype build. It was developed and recorded on the M1 Mac, using its built-in recording software, while running on battery.
On this prototype, the viewport’s contents are rendered to a separate, off-screen layer in GPU memory prior to being drawn on either window. Each time a frame is rendered to the main window, the second window is asked to update, and it draws the prompter’s contents from the off-screen layer. This improves performance considerably, because the image no longer has to be copied from one window to another through the CPU. It also eliminates our previous requirement to separate the editor from the prompter to improve the projections’ performance, since we no longer need to intercept the window’s drawing calls to grab its contents.
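A minimal sketch of the idea in QML (item and window names are my own, not QPrompt’s actual code): the viewport is rendered to an off-screen GPU layer, and a ShaderEffectSource in the second window samples that texture directly. Note that referencing an item that lives in another window only works when both windows share a rendering context, which is not guaranteed on every platform.

```qml
import QtQuick 2.15
import QtQuick.Window 2.15

Window {
    id: editorWindow
    visible: true

    Flickable {
        id: viewport          // the editor's viewport (name assumed)
        anchors.fill: parent
        layer.enabled: true   // render to an off-screen layer in GPU memory
    }

    Window {
        id: prompterWindow
        visible: true

        // Draws from viewport's off-screen texture; no CPU-side pixel copy.
        ShaderEffectSource {
            anchors.fill: parent
            sourceItem: viewport
            live: true        // re-draw whenever the source re-renders
        }
    }
}
```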
I’ve yet to test how this new approach works on Windows and on the Raspberry Pi. I expect it to further reduce projection-related crashes on Windows and to improve performance on the Raspberry Pi. Higher resolutions should also perform better, but the approach will only be usable up to a certain, yet-to-be-determined resolution, because of the limited amount of video memory.
Although I haven’t experienced any screen tearing, my biggest fear is that this change could introduce screen tearing on systems with low-spec CPUs. Last but not least, because everything is done through QML, I’m not yet able to take the feed and use it as a source for NDI. That may still require separating the editor from the prompter, though I doubt it’ll come to that, because the Qt framework is very flexible. The more we can re-use the off-screen texture for different purposes, the better for overall performance.
This is frustrating… The solution only works on macOS. There are protections in the Qt framework, applying to the other systems, that prevent me from using the GPU’s contents this way on Linux and Windows. I need to keep looking for a solution that works on all platforms, or at least on these two.
Fortunately, having seen it work, it is clear that keeping visuals on the GPU is the right path to take. As long as all displays are controlled by the same GPU, this approach will work.
What approaches do GPU-intensive games/simulations/3D-rendering on Windows or the RPi take?
The theory is the same for all systems. Be it 2D or 3D, one must reduce the number of calls to the GPU and have any graphics-intensive processes happen exclusively on the GPU, with little or no CPU intervention.
The latter almost doesn’t apply here, but to reduce calls to the GPU, the QML engine makes use of a scene graph that describes all visible items. This graph is traversed node by node, from the frontmost element to the furthest one. As the graph is traversed, GPU drawing instructions are combined into as few calls to the GPU as possible: layers of solids and transparent items that do not occlude each other are drawn together, while elements with transparent areas are deferred to the next call, until the whole graph has been traversed. Replacing transparencies with solids and with illusions of transparency is one of the techniques I’m using to improve QPrompt’s performance. There are a few more places where this kind of change could be made, but this is not the area where a performance boost is needed.
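As a toy illustration of that kind of change (colors and items are made up, not QPrompt’s actual code): a semi-transparent item forces alpha blending and can split the renderer’s batches, while a solid color that merely looks like a blend stays in the opaque pass.

```qml
import QtQuick 2.15

Item {
    // A semi-transparent overlay must be alpha-blended over whatever is
    // behind it, which can force the scene graph to split its batches:
    // Rectangle { anchors.fill: parent; color: "#80222222" }

    // An "illusion of transparency": when the background color is known,
    // pre-blend it into a solid color. Opaque items batch together and
    // are drawn in fewer GPU calls.
    Rectangle { anchors.fill: parent; color: "#919191" }  // pre-blended result
}
```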
Where QPrompt falls short is when prompting screen contents on more than one window. The QML Application Engine class used in QPrompt is tightly integrated with the window’s update mechanism and, because drawing operations are time sensitive, any accesses to GPU data must be made from the window’s own rendering thread; otherwise we risk screen tearing and other forms of visual data corruption. Thanks to Qt’s modular nature, the QML engine can be decoupled from the window, but this is harder to achieve here because Kirigami uses a customized abstraction of QML’s Application Window that would also have to be replaced, increasing maintenance costs and somewhat limiting my ability to sub-license QPrompt for commercial agreements.
To ensure the right contents are shown on screen at all times, any accesses to the textures of QML items must occur from the same rendering thread used by the window that contains said item. This may require implementing custom windows with a shared or synchronized rendering thread, and those windows must still be able to communicate with each other without breaking Kirigami’s inner workings.
Because we’re in the middle of a technological transition between Qt 5 and Qt 6, matters are a little more difficult. But the truth is that, despite all this, Qt continues to be the best framework for developing high-performance, cross-platform software with tight system integration. If the QML engine won’t allow us to render the same contents on two distinct windows, we could still develop our own renderer in OpenGL, or in Vulkan, DirectX, and Metal, and incorporate its output while the windows are drawn. I personally hope we don’t need to go there, because recreating the engine feels insurmountable to do on my own, and I’d also like to work on other projects and on other areas of QPrompt. Nevertheless, because of Qt’s modularity, I’m confident there’s another way around this just waiting to be found.
As you can see, this goes beyond what is used to optimize most 3D software, since 3D software usually renders to a single window, even when it renders multiple views at the same time. Nonetheless, thank you for asking, because it got me thinking about games, and I think I’ve figured out a way to transpose an optimization technique I had thought about using in one of the toy teleprompters into QPrompt, to reduce GPU load a little.
If I use a fragment shader to shift the colors of the viewport and simulate an overlay, instead of actually compositing the layers of an overlay, maybe we could gain some performance, assuming the overhead of adding this shader isn’t higher than the cost of compositing the overlay through the scene graph. This would be a modern-day equivalent to shifting color palettes during scan-line interrupts, the technique used to simulate water in Sonic the Hedgehog back when it was unfeasible to compute transparencies.
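A rough sketch of what that shader could look like, in Qt 5 style inline GLSL (the tint color and item names are assumptions; Qt 6 would instead require a precompiled .qsb shader):

```qml
import QtQuick 2.15

// Tints the viewport's texture in a single pass, simulating an overlay
// without compositing a separate translucent item in the scene graph.
ShaderEffect {
    anchors.fill: viewport
    property variant source: ShaderEffectSource { sourceItem: viewport }
    property color tint: "#5500AAFF"   // assumed overlay color (alpha = strength)
    fragmentShader: "
        varying highp vec2 qt_TexCoord0;
        uniform sampler2D source;
        uniform lowp vec4 tint;
        uniform lowp float qt_Opacity;
        void main() {
            lowp vec4 c = texture2D(source, qt_TexCoord0);
            // Shift colors toward the tint, as an overlay would appear to do.
            gl_FragColor = vec4(mix(c.rgb, tint.rgb, tint.a), c.a) * qt_Opacity;
        }"
}
```

Whether this wins anything depends on the scene: the shader replaces one composited item with one full-screen pass, so it only pays off if that overlay was breaking batching or forcing extra blended draws.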
Whenever you decide to write books or teach courses, please let me know!
Then I’ll let you know here when I start my first “proper” YouTube channel. My plan is to teach about how Linux apps are made and how people could and should approach making them, based on their goals and prior experiences.
I found a way for the approach that worked on Mac to work on Linux.
Leaving these docs here for personal use:
If you want any assistance in editing the content for your channel, let me know.
Thanks, I’ll let you know when I need some help. For now I plan on doing the editing myself so I can help develop Olive Video Editor or Kdenlive. To me, Olive seems to have the potential to become a serious editor for professionals, and Kdenlive is the best open source editor that we have today on Linux (but its UI is inconsistent and redundant, and it crashes when you get serious).
If I can’t deal with either, I’ll get Premiere or fall back to Final Cut, since I own a license. Back in 2016 I used to know Premiere and AE like I know my keyboards. Here’s the second-best thing I’ve edited, recorded in 2012: ¡No al Veneno! : Cine C.A.R.E.T.A.S. Inc. : Free Download, Borrow, and Streaming : Internet Archive