Full screen / mobile controls not covering teleprompter view

Is it possible to bring back the transparent background, instead of the grey background behind the bottom full screen controls, without a performance hit?

There would be a small performance hit, as having it like that reduces the number of drawing calls used to compose layers together. The hit isn’t too big. The full reasons why it is like this now are too long and complicated to be worth explaining, but I can set it back as it was in a future update.

By the way, you can get more vertical space by going to Other Settings and enabling either Always hide formatting tools or Auto hide formatting tools. The first one is best for low resolutions. The second one is there if you wish to have visual access to the formatting tools while in edit mode, but it results in the editor toolbar having different heights for different prompter states.

Then I vote leave it as-is. Pis need all the performance optimization they can get. :wink:

And thank you. I will try that out.

Is it possible to make the height of the bar shorter with smaller icons?

Can the bar color be changed (preferably to Black)?

I unfortunately don’t have control over its size, as that is also managed by the Kirigami framework.

The background can be changed and made dynamic. It could become black outside of edit mode while in fullscreen and stay gray for edit mode and in windowed mode.
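As a rough sketch of how such a dynamic background could be expressed in QML (the property names here are illustrative, not QPrompt’s actual ones), a single color binding can switch on the window and prompter state:

```qml
import QtQuick 2.15

Rectangle {
    id: toolbarBackground
    // Hypothetical state properties, for illustration only
    property bool fullScreen: false
    property bool editMode: true
    // Black while prompting in full screen; grey in edit mode and windowed mode
    color: fullScreen && !editMode ? "black" : "grey"
}
```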

Understood.

How about an auto-hide function unless you mouse over the bottom of the screen when in Prompt mode, like the OS taskbar?

The main window is built using Kirigami’s API for creating main windows, and because of that there’s no way to completely hide action buttons from it. Either you get a bar at the top or the buttons at the bottom. Other windows, such as the ones used for screen projections, don’t have that limitation, because I’m building them with Qt’s API. Qt’s API builds windows using the native APIs of the OS in use and its own QML engine APIs, which prepare graphics for the underlying graphics interface, which is OpenGL in all current versions of QPrompt.

Kirigami’s main window is built on top of Qt’s. In the same way, we could derive our own custom kind of window built on top of Kirigami’s, and re-program its behavior; but I try to stay away from solutions like these because they drastically increase maintenance costs due to things breaking when the upstream project makes changes.

Once QPrompt’s editor and prompter are decoupled into distinct windows, the prompter would be using a window made from Qt’s API, so any buttons would be fully created and managed by us, meaning we would be able to hide them in full screen or at all times.
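As a minimal sketch of what such a decoupled prompter window could look like (plain Qt Quick with no Kirigami chrome; this is not QPrompt’s actual code):

```qml
import QtQuick 2.15
import QtQuick.Window 2.15

// A bare Qt Quick Window: no mandatory toolbar or action buttons,
// so every on-screen control is ours to show or hide.
Window {
    visible: true
    visibility: Window.FullScreen
    color: "black"
    // prompter content would go here
}
```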

Going back to optimizations: with the update to Qt 6, OpenGL would be replaced with DirectX on Windows, Metal on macOS, and Vulkan on Linux. This will improve performance on Windows and Mac, but it may reduce it on Linux. If that turns out to be the case, I would revert to OpenGL on Linux in particular. Vulkan support should continue to improve on Linux, so it’s a matter of trying it out on low-end systems until the switch is worthwhile.

For up to date information on how the transition to Qt 6 is going for KDE Frameworks, we can attend this talk: Akademy 2022 (1-7 October 2022): KDE Frameworks 6 - Plans and Progress · Indico

Going back to the first topic. The hit for having controls cover the prompter viewport is small. If it’s too obtrusive, I can revert it.

Is it possible to include an option to toggle from Edit to a ‘clean’ Prompter window (only scrolling text and cursor/timer decorations) via hotkeys, where all menus and buttons would only be available in the Edit mode?

I’m a bit lost as to what you mean by “toggle from Edit to a Prompter” window. This sounds like pressing Alt+Tab or Command+Tab to switch between windows, which is provided by the OS.

A decoupled prompter window would be clean as you describe; any visual controls would be optional, and by default limited to configuring the prompter’s orientation, similar to how projections can be configured in v1.1.

Global hotkeys should be able to control the prompter regardless of what window is in focus, including windows from other software; this would apply both to the current editor/prompter, as well as a decoupled prompter on a separate window.

Basically a hotkey combination that would cycle from this:

… to this, and back:

I see. In order to achieve that, the clean prompter would have to be contained in a window of its own. Therefore the editor would need to be able to work separately from the prompter anyway.

I understand. And if I recall correctly, you said the Imaginary two-monitor approach was not economically feasible at this time?

It can be done. The problem is that the approach I used to keep instances synchronized in Imaginary Teleprompter is flawed and bound to lose synchronization over time unless re-synced every now and then, which leads to jitter.

Copying the prompter’s output image, as done in QPrompt, is the only approach I know of that can guarantee instances are kept in sync to within 1 frame of delay. The problem with QPrompt is that the copying is performed on the CPU, because of a limitation with QML I didn’t foresee.
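For illustration (this is not QPrompt’s actual code, and the helper name is mine): QML’s built-in way to copy an item’s rendered output is `Item.grabToImage()`, which reads the frame back into a CPU-side image — the kind of CPU copy being described here:

```qml
import QtQuick 2.15

Item {
    id: prompter
    // ...prompter content...

    // Hypothetical helper: mirror this item's current frame into an
    // Image element elsewhere. The grabbed result lives in CPU memory,
    // which is where the bottleneck comes from.
    function mirrorTo(targetImage) {
        prompter.grabToImage(function (result) {
            targetImage.source = result.url
        })
    }
}
```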

QPrompt’s approach to synchronization was in fact originally prototyped on Imaginary Teleprompter, but it was unusable, even on high-end hardware, due to a bottleneck with Electron that led to memory leaks, halted animations, and eventually to Imaginary Teleprompter crashing.

The approach works on QPrompt because only the copying step is done on the CPU, but even that results in a bottleneck. Eliminating the CPU from the equation is the only way to ensure prompter content can be duplicated, or transmitted over the network, with great performance. Since NDI support and other means of video transfer have been requested multiple times, copying frames continues to be the right approach; we only need to find a more performant way of achieving it, one that, ideally, doesn’t rely on the CPU.

I fear this solution may demand too much from the current Raspberry Pi’s GPU. (May; I don’t really know for sure.) But it’s still the right way to go, as Imaginary’s architecture is difficult to maintain (most of its teleprompter source code is dedicated to keeping instances in sync; herein lies the economic feasibility you referenced) and doesn’t allow for transmitting the prompter over the network.

Another company {name redacted} uses a two-computer approach: one is the editor, and the other is solely a dedicated animation engine. Would something like that work with networked computers?

Something like that could be achieved with QPrompt with the remote control and REST API features that I plan on adding. One instance of QPrompt could be used as the editor and remote control device, while another one is dedicated to prompting.

In fact, something like that could be achieved today through a shared network drive on which both instances of QPrompt have the same file open. The only difference is there’s no way to tell QPrompt to start and restart remotely, so that would call for hacky workarounds.

The challenge lies in how many display outputs the prompter computer would be rendering. If it’s only a single display that gets mirrored by the OS, there will not be a strong performance hit, provided all screens are controlled by the same video card. But if screens are duplicated the way QPrompt does it, one would still have to solve the CPU-copying issue.

I was hoping the render-engine computer (always in Prompt mode, with dedicated controls) could either be updated dynamically with content, which it would render, or the Edit computer would render an animation and swap it with the current one on the Prompt computer… but as you said, that would probably be a kludged approach… I wonder if an old game console could handle it?

Qt’s QML is built for efficiency, which reduces power consumption on mobile devices and makes it great for low-cost embedded devices. A render engine similar to the ones used for video games would tax a device like a Raspberry Pi much more, because it needs to render images over and over on every refresh of your display, but those kinds of engines have some advantages over QML.

One of QML’s limitations is that it slows down when dropping frames, leading to things like the countdown taking longer to reach 0. The current countdown is drawn on a canvas using the CPU, which is inefficient. I plan to increase its performance by re-creating the countdown animation in a GPU shader, or in QML. The performance increase should eliminate the need to drop frames, but it still wouldn’t prevent QPrompt from slowing down if the computer can’t process things fast enough.
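As a rough illustration of the “re-create it in QML” option (the names and values here are mine, not QPrompt’s): a declarative Text plus a Timer lets the scene graph compose the countdown on the GPU, instead of rasterizing a Canvas on the CPU every frame:

```qml
import QtQuick 2.15

Item {
    // Illustrative countdown: the scene graph renders the Text node,
    // so no per-frame CPU rasterization is involved.
    property int secondsLeft: 3

    Text {
        anchors.centerIn: parent
        font.pixelSize: 120
        text: secondsLeft
    }

    Timer {
        interval: 1000
        running: secondsLeft > 0
        repeat: true
        onTriggered: secondsLeft--
    }
}
```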

A dedicated renderer that’s focused on accuracy wouldn’t be subject to slowdowns; it would just drop frames in those circumstances. I’ve actually been thinking about this for some time, but I haven’t found a way to target DirectX, Metal, and Vulkan simultaneously with reasonable development costs. I thought about using SDL2 and FreeType 2 to accomplish this, but Metal support for SDL seems to be lacking.

Developing a dedicated engine also means implementing the algorithm that would lay formatted text glyphs onto the frame. This should perform efficiently, even when dealing with all the glyphs in Chinese text and with the complexity of glyph combinations that cannot be predicted for some languages.

So… to answer the question about the consoles… :grin: I think a PS3 and its contemporaries could handle this very well. A PS2 could too, but you’d start running into RAM limitations. For anything older you’d have to limit the character set. For an SNES, Genesis, or NES, one would have to resort to palette swapping and scanline-manipulation tricks to get all the characters on the screen at the same time; those would all be limited in their available font sizes, and the NES would probably have to be letterboxed to work. An Atari simply cannot do it, unless you do all the processing somewhere else and use the Atari just as a receiver.

If I had all the time in the world, and there weren’t any other things for me to work on, I would write these toy teleprompters for fun. Hopefully I never do most of these; there are better things to do!

I’m assuming QPrompt would’ve been ported to Apple Silicon, iOS, iPad OS, WASM (the browser), FreeBSD, and Haiku OS by that point.

Love the video!!!

Btw, if you want to tinker with an old proprietary prompter that ran in DOS/DOSBox, I still have it (somewhere), lol.
