6.5 mesh regressions

QbProg:

Hello,
Using version 6.5, tessellating a BSplineSurface face with the same (non-relative) precision as in 6.3 produces many more triangles (e.g. 20,000 where 6.3 produced 300), and it takes much longer (probably due to the increased number of triangles).
I'm using BRepTools::Mesh with a non-relative tessellation precision (0.5).

Has something changed in how the precision is handled?

Jerome Robert:

Hi Thomas,

In the case of a GeomAbs_BSplineSurface surface, BRepMesh_FastDiscretFace::InternalVertices creates too many vertices. This happens with both relative and non-relative deflection. Try this patch; it works for me.

Regards,

Jerome

Attachments: 
Pawel:

Hi Thomas,

I observed something similar.

However, the Release Notes:

"The BRepMesh triangulation algorithm has been seriously revised and now tries hard to fulfill the requested deflection and angular tolerance parameters. If you experience any problems with performance or triangulation quality (in particular, display of shapes in shading mode), consider revising the values of these parameters used in your application."

give a hint as to what the reason for such behaviour might be.

Pawel

Jerome Robert:

Hi Pawel,

The triangle density is much higher (50 times) when BRepMesh_FastDiscretFace::InternalVertices is called on a GeomAbs_BSplineSurface. It's really a bug.

On my side, in every other case the mesher gives the expected results. Moreover, relative tolerance is less tricky to use than in OCC 6.3.

Jerome

Roman Lygin:

Regressions in the mesher were the biggest issue I noticed in 6.4.x, and they put me off adopting it. The regressions were in performance, number of triangles (and hence memory footprint), reported deflection and sometimes visualization quality.
I reported this to the OCC team but did not receive any feedback.

Some test models which revealed serious regressions in 6.4.2 behave better in 6.5.0 in terms of mesh quality, but the number of triangles and the timing are still worse. I observed models where the number of triangles grew by up to 80x, with a 370x slowdown, in 6.4.x vs 6.3.1. With 6.5.0, however, the timing seems to have returned to acceptable levels (e.g. the whole CAD Exchanger test suite only takes about 10% longer than with 6.3.1). The number of triangles is still high in 6.5.0, though. This concerns me most of all, as for 32-bit apps the increased memory footprint can be large enough to prevent working with bigger models.

As Pawel mentioned, there is a recommendation in the Release Notes to check the requested deflection. However, the observations above relate to the 'default' value (e.g. the one calculated in StdPrs), so it should rather have been tweaked there than in user code.

It would be interesting to hear OCC folks comments here.
Roman

SandAnar:

Roman,
1. In fact, as you surely know, fixes to improve quality (there are a lot of fixes in 6.5 to solve customer problems) lead to additional effort in the triangulation algorithm and produce a much larger number of triangles than before. Yes, the algorithm has become slower on BSpline surfaces, but the OCC team is working to solve this problem too.
2. May I ask where I can download Open CASCADE 6.4.2 to compare it with 6.5? As far as I can see, only major versions of OCCT are available for download.
Best Regards, Pavel.

Roman Lygin:

Pavel,

Thank you for the note. I realize there is often a trade-off between quality and performance. Presumably the default behavior that existed in 6.3.x was acceptable for *most* users, at least for visualization purposes. With 6.5, in the overwhelming majority of cases there will be no visual improvement in shading, but *everyone* will now pay with reduced performance and a larger memory footprint. The latter can be especially dangerous for larger models, which may no longer fit into memory for 32-bit apps.
My point is that if a new meshing algorithm produces many more triangles for the same deflection value compared to what it used to produce in the past, the new default deflection value could be adjusted (made coarser) to improve the balance. Customers sensitive to mesh quality and expecting a finer mesh for the default value could have kept the old default. So it's a pay-as-you-go principle.
As for 6.4.x, please check with the OCC team to get access to it.

Best regards,
Roman

QbProg:

Using Jerome's patch fixes the number of triangles, but it is still much slower than 6.3, in particular when tessellating BSpline surfaces, even with larger tolerances.

It seems that most of the time is spent in MeshAlgo_CircleInspector::Inspect ...

sergey zaritchny:

Hi all,
Thanks to the community for the lively interest taken in the evolution of the BRepMesh algorithm.
Below you may find some comments and guidelines from our mesh expert on the subject:
1. The OCC mesh (BRepMesh) algorithm is tailored for visualization.
2. OCC provides a plug-in interface for detailed tuning and, in addition, ExpressMesh as a commercial product.
3. We receive a number of bug and improvement requests from our customers; that is why the evolution of BRepMesh is being performed.
4. Some slowdown on B-splines using visualization tolerances is possible (we have performed certain measurements on our test base),
but some regressions can occur on models we do not have on hand.
5. OCC recommends the plug-in architecture instead of patching.

Besides, we will be pleased to get a description (and data for reproduction) of all the problematic cases you have met using BRepMesh.
These data will be carefully analyzed and used for the further evolution of the BRepMesh algorithm.
Thanks in advance for any input.
Regards

QbProg:

Hi Sergey,
I will put some IGES test cases where the performance degraded here ASAP next week.

1) OK, so it should be a little bit faster, since waiting half a minute for a render is a bit too much nowadays.
I maintain that using a complete Delaunay mesher degrades performance too much, while in many cases the triangle quality is not *so* important for visualization. You can see this from benchmarks. Using a different or simpler approach could increase performance dramatically.
In the current state one has to decrease the precision, but then you get bad surface quality with nice-looking triangles.
2) I'm using BRepMesh_IncrementalMesh directly, so I don't think I could use a plugin interface for that, can I?

Just to let you know, the worst regressions come from BSpline surfaces. The mesher seems to generate a lot more triangles, even if the surface is small. For example, scaling the surface by a factor of 20 increases the number of triangles by only 50%.
Also, may I suggest that for future drastic modifications like this one you enable an opt-in option (i.e. a boolean), so that users can choose which implementation to use (something like you did with the boolean operations)?

Thank you for the indications,
Regards,
Thomas

sergey zaritchny:

Hi Thomas,
Thanks for your suggestions.
The mentioned problems of the meshing algorithm have been registered.
Thanks to Roman Lygin for sending data to reproduce the problems.
Our experts are analyzing the presented cases. We will try to fix
them in the nearest public release.
All additional test cases are welcome.
Regards

Le TEXIER Paul:

Hi Thomas,
and others ...

Don't you think it is time for OCC to adopt newer technology?
For example:
* OpenGL 4.0 GPU Tessellation
http://www.geeks3d.com/20100730/test-first-contact-with-opengl-4-0-gpu-tessellation/
* Real-Time GPU Rendering of Piecewise Algebraic Surfaces
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.93.1464

Sharjith Naramparambath:

Does tessellation done for visualization serve the same purposes as for engineering analysis and calculation? I wonder whether you can get enough data for calculation purposes (in domains where meshes are used) from tessellations done by the GPU. I think it is altogether a different matter when it comes to using tessellations for mesh intersections, collision detection, toolpath generation, etc., which cannot be obtained from the tessellations the GPU generates for visualization purposes.

QbProg:

Hi,
I don't think it's time to move to the GPU yet. Surely there could be different algorithms for visualization and for precise calculations.
For visualization, Delaunay could be used only up to a certain precision level; successive refinements could then be done using a faster approach (an edge-split method could be faster). This would produce much finer triangles in much less time, but you could get non-Delaunay triangles.

About 6.5: I'll try to use the 6.3 version instead. I'm only using it for visualization, and not via the OCC presentation functions but through direct OpenGL.
Increasing the precision value doesn't work either; you get a coarser (ugly) representation with too many triangles anyway.