How to get the total GPU memory size?

When starting our OCC-based application with an older graphics card (Nvidia Quadro FX 1700 with 500 MB GPU memory) a user received an error message that the application requested more GPU memory than is available.
When checking, we realized that the main reason for the high GPU memory consumption is that we activated 8x MSAA (anti-aliasing).
So, we had the idea to check the total amount of GPU memory and, if it is low, to limit MSAA to 4x or switch it off, since a low total amount of memory indicates an older graphics card.

OpenGl_Context provides an AvailableMemory() method, but unfortunately none for the total memory. MemoryInfo() returns a string including the total memory for NVIDIA, but not for ATI.

Here is a description of how to retrieve the total memory for ATI also:
http://nasutechtips.blogspot.de/2011/02/how-to-get-gpu-memory-size-and-u...
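
For reference, a minimal sketch of what such a query looks like at the raw OpenGL level (this is not the OCCT implementation; the enum values come from the GL_NVX_gpu_memory_info and GL_ATI_meminfo extension specifications and should only be used after checking that the corresponding extension is advertised):

// Sketch only: querying memory info through vendor GL extensions.
#include <GL/glew.h>

#ifndef GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX
#define GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX 0x9048
#endif
#ifndef GL_TEXTURE_FREE_MEMORY_ATI
#define GL_TEXTURE_FREE_MEMORY_ATI 0x87FC
#endif

// Returns the total GPU memory in KiB on NVIDIA drivers (0 if unsupported).
GLint getTotalMemoryNvx()
{
  GLint aTotalKiB = 0;
  glGetIntegerv (GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &aTotalKiB);
  return aTotalKiB;
}

// GL_ATI_meminfo reports only FREE memory (four values in KiB), not the total size.
GLint getFreeTextureMemoryAti()
{
  GLint aFreeKiB[4] = { 0, 0, 0, 0 };
  glGetIntegerv (GL_TEXTURE_FREE_MEMORY_ATI, aFreeKiB);
  return aFreeKiB[0];
}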

Would it be good to add a function to OpenGl_Context returning the total GPU memory? Or do you have other thoughts on this matter?

Kirill Gavrilov:

Having GPU memory info is not very useful per se - there are many aspects affecting memory usage, like the window compositor, system needs, other running applications, the driver/system memory manager, and whether dedicated GPU memory exists at all (integrated graphics / a fusion of several GPUs).

Moreover, enabling MSAA8 on a 4K display is not only expensive (it is easy to run out of video memory even with a 2 GiB video card, because framebuffers usually require contiguous blocks of memory) but also not so useful (HiDPI compensates for aliasing effects to some degree).

Here is a description of how to retrieve the total memory for ATI also

OCCT already implements fetching memory info through the GL_ATI_meminfo extension, but AMD dropped this extension from the Catalyst drivers some time ago for unknown reasons.

Also, Intel drivers support neither of these extensions (although Intel graphics has no dedicated GPU memory anyway, the absence of this information does not make the application's choice any simpler).

Timo Roth:

So, you would generally recommend using only MSAA4?

Would it be possible to handle such exceptions? Then we could try MSAA8 and switch to MSAA4 if it fails. But it seems the exception message box is created directly by the graphics driver, so it probably cannot be handled.
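
One possible workaround is a probe along the following lines (a sketch under the assumption, not guaranteed by OCCT or the drivers, that the failure is reported as a GL error such as GL_OUT_OF_MEMORY rather than only as the driver's message box): check whether a multisample renderbuffer of the viewport size can be allocated before enabling MSAA in the view, and fall back to fewer samples otherwise.

// Sketch only: probe whether an MSAA color renderbuffer of the given size can be
// allocated, falling back to fewer samples. Assumes a current GL 3.0+ context;
// a real driver may still fail later (e.g. on the depth buffer or a window resize).
#include <GL/glew.h>

int chooseMsaaSamples (GLsizei theWidth, GLsizei theHeight)
{
  const GLint aCandidates[] = { 8, 4, 2 };
  for (GLint aSamples : aCandidates)
  {
    GLuint aRbo = 0;
    glGenRenderbuffers (1, &aRbo);
    glBindRenderbuffer (GL_RENDERBUFFER, aRbo);
    while (glGetError() != GL_NO_ERROR) {} // clear stale errors
    glRenderbufferStorageMultisample (GL_RENDERBUFFER, aSamples, GL_RGBA8, theWidth, theHeight);
    const GLenum anErr = glGetError();
    glBindRenderbuffer (GL_RENDERBUFFER, 0);
    glDeleteRenderbuffers (1, &aRbo);
    if (anErr == GL_NO_ERROR)
    {
      return aSamples; // allocation succeeded for this sample count
    }
  }
  return 0; // no MSAA
}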

We tested the memory consumption with the tool GPU-Z:
https://www.techpowerup.com/download/techpowerup-gpu-z/
On the sensors tab you can see the dedicated memory usage.

On my system (Win10 64bit, ATI FirePro V5800) the OCC view consumed approximately
- 35 bytes/pixel without MSAA
- 65 bytes/pixel with 2x
- 137 bytes/pixel with 4x
- 245 bytes/pixel with 8x
We just started Draw (OCCT 7.1), opened a view, changed the MSAA settings and watched how the memory changed. Finally, we divided the memory by the viewport size.
To us this memory consumption seems quite high.
Can you reproduce similar values and what is the reason for it?
Could some settings be changed to reduce it?
Or can the OCCT code be made more efficient?
How do other CAD systems handle this? In Siemens NX10 the memory consumption is lower and doesn't change when antialiasing is switched on or off. Do they use other techniques?
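
For illustration (the viewport size here is hypothetical, not one of the measured configurations): at 245 bytes/pixel, a 1920x1080 viewport would need about 245 * 1920 * 1080 ≈ 508 MB for the view buffers alone - already more than the 500 MB of the Quadro FX 1700 mentioned above.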

Kirill Gavrilov:

MSAA2 has almost zero effect on quality (thus, it is not very useful), MSAA4 is usually fine, and MSAA8 can be considered great but very expensive due to its high memory usage (and it is very slow on Intel graphics). Not every user wants MSAA at all, so an option to disable it would be appreciated.

I would suggest enabling MSAA by default only when you are sure the hardware is capable of it - but determining this might be quite complicated. Some games perform small benchmarks on synthetic data to determine optimal settings.

So an application that wants to avoid such issues will use smaller values by default, allow the user to switch options (through predefined profiles and/or advanced settings), and provide launch options to start the application in a safe mode (in case the user has accidentally selected bad settings or changed the computer configuration).
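
A sketch of what that could look like on the application side (assuming OCCT 7.x: NbMsaaSamples and V3d_View::ChangeRenderingParams() are the OCCT entry points, while the profile/safe-mode parameters themselves are hypothetical):

// Sketch: apply an anti-aliasing profile chosen by the user (or forced by a
// "safe mode" launch option) instead of hard-coding MSAA8.
#include <V3d_View.hxx>
#include <Graphic3d_RenderingParams.hxx>

void applyAaProfile (const Handle(V3d_View)& theView,
                     int  theRequestedSamples, // e.g. 0, 2, 4 or 8 from user settings
                     bool theIsSafeMode)       // hypothetical launch option
{
  Graphic3d_RenderingParams& aParams = theView->ChangeRenderingParams();
  aParams.NbMsaaSamples = theIsSafeMode ? 0 : theRequestedSamples;
  theView->Redraw();
}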

Or can the OCCT code be made more efficient?

The application has control over OCCT options affecting GPU memory usage, such as whether an additional FBO is used for the immediate layer or not.
But changing the defaults has its own side effects.

Apart from the driver error you have observed (actually, I have not seen such an error, and I don't think it can be handled), there are more issues which might occur (a small check is sketched after this list):
- FBO allocation might fail in a regular way (i.e. handled by OCCT), leaving the OCCT visualization working improperly (e.g. without proper optimizations / off-screen effects).
- FBO allocation might succeed, but result in slow rendering performance due to insufficient GPU memory. I see this effect randomly on my Radeon.
- The FBO might take so much memory that allocating GPU memory for the VBOs of the geometry fails - resulting in bad rendering performance.
- FBO allocation might fail dynamically while resizing the window.
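
The check behind the first item could look roughly like this (a raw-GL sketch, not the OCCT code): after the MSAA buffers have been (re)allocated, glCheckFramebufferStatus() and glGetError() tell whether the allocation actually succeeded, so the renderer can drop to fewer samples instead of drawing into a broken target.

// Sketch: verify that a freshly (re)allocated MSAA framebuffer is actually usable.
// Returns true when rendering into it is safe; the caller can drop to fewer
// samples (or to no FBO at all) otherwise, e.g. after a window resize.
#include <GL/glew.h>

bool isFboUsable (GLuint theFboId)
{
  glBindFramebuffer (GL_FRAMEBUFFER, theFboId);
  const GLenum aStatus = glCheckFramebufferStatus (GL_FRAMEBUFFER);
  const GLenum anErr   = glGetError();
  glBindFramebuffer (GL_FRAMEBUFFER, 0);
  return aStatus == GL_FRAMEBUFFER_COMPLETE
      && anErr   != GL_OUT_OF_MEMORY;
}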

On the sensors tab you can see the dedicated memory usage.

You should be careful when analyzing the output of such tools, because GPU memory management is quite complicated on modern systems.

In Siemens NX10 the memory consumption is lower and doesn't change when antialiasing is switched on or off. Do they use other techniques?

I don't know what is done in the referenced system, but MSAA is not the only option for achieving an anti-aliasing effect.
MSAA can be enabled for the window buffer instead of the offscreen FBOs (OCCT does not allocate MSAA for the window buffer, but the user might force MSAA in the driver settings, which would not improve quality but would increase memory usage). Anti-aliasing can also be done by super-sampling, by using deprecated OpenGL functionality (removed within the last OCCT releases), or with post-processing algorithms.

FXAA (Fast Approximate Anti-Aliasing) is one of the popular cheap anti-aliasing algorithms; it works as a post-processing filter on a rendered image of the usual size (and thus does not increase memory consumption). OCCT implements a similar algorithm for the ray tracing renderer (the Graphic3d_RenderingParams::IsAntialiasingEnabled option), but not for the conventional renderer.
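
For the ray tracing path, that option is set on the view's rendering parameters; a minimal sketch (assuming OCCT 7.x):

// Sketch: switch a view to the ray tracing renderer with its built-in
// post-process anti-aliasing enabled.
#include <V3d_View.hxx>
#include <Graphic3d_RenderingParams.hxx>

void enableRayTracedAa (const Handle(V3d_View)& theView)
{
  Graphic3d_RenderingParams& aParams = theView->ChangeRenderingParams();
  aParams.Method                = Graphic3d_RM_RAYTRACING;
  aParams.IsAntialiasingEnabled = Standard_True;
  theView->Redraw();
}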

FXAA is computationally cheap (relative to MSAA), but it produces a blurry image, which is especially noticeable on small text. There are alternatives like Conservative Morphological Anti-Aliasing, intended to reduce the blurring of FXAA through more sophisticated local filtering (which is, of course, more computationally intensive):
https://software.intel.com/en-us/blogs/2013/10/10/conservative-morpholog...

Kirill Gavrilov:

The patch for #0028466 adds support for the WGL_AMD_gpu_association extension for fetching the total GPU memory (AMD drivers for Radeon).
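
For reference, the extension is used roughly like this (a sketch based on the WGL_AMD_gpu_association specification, not the code of the patch itself; the function pointers are fetched via wglGetProcAddress, and WGL_GPU_RAM_AMD reports the memory size in MB):

// Sketch (Windows only): query total GPU memory via WGL_AMD_gpu_association.
#include <windows.h>
#include <GL/gl.h>

#ifndef WGL_GPU_RAM_AMD
#define WGL_GPU_RAM_AMD 0x21A3
#endif

typedef UINT (WINAPI *wglGetGPUIDsAMD_t)  (UINT theMaxCount, UINT* theIds);
typedef INT  (WINAPI *wglGetGPUInfoAMD_t) (UINT theId, INT theProperty, GLenum theDataType, UINT theSize, void* theData);

// Returns the total GPU memory in MB, or 0 on failure. Requires a current GL context.
UINT getTotalGpuMemoryAmd()
{
  wglGetGPUIDsAMD_t  aGetIds  = (wglGetGPUIDsAMD_t )wglGetProcAddress ("wglGetGPUIDsAMD");
  wglGetGPUInfoAMD_t aGetInfo = (wglGetGPUInfoAMD_t)wglGetProcAddress ("wglGetGPUInfoAMD");
  if (aGetIds == NULL || aGetInfo == NULL)
  {
    return 0; // extension not available
  }
  UINT aGpuId = 0;
  if (aGetIds (1, &aGpuId) == 0)
  {
    return 0; // no GPU ids reported
  }
  UINT aMemMb = 0;
  aGetInfo (aGpuId, WGL_GPU_RAM_AMD, GL_UNSIGNED_INT, 1, &aMemMb);
  return aMemMb;
}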