Hi All,
I currently want to provide a WMS for several large flexible meshes (i.e. > 500,000 polygons each), and I would like to find out the best approach so that the map renders quickly while keeping memory usage manageable. The polygon sizes vary greatly, from very dense in some areas to very sparse in others, but we would still like to show the data as geometrically accurately as possible.

I have tested creating a custom plugin using an InMemoryFeatureLayer and storing a single feature built from a multipolygon of > 200,000 polygons (roughly what the test looks like is sketched below). It takes a long time (1-2 minutes) to create the feature, then another 10-20 seconds to generate the bitmap on every zoom in/out, and the memory usage is huge (> 400 MB). If I have a few such datasets to display on the web, I will run out of memory in no time.
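For reference, the test is roughly along these lines (just a sketch; how the mesh is read into the multipolygon is left out, and meshMultipolygon is only a placeholder name):

```csharp
// Rough sketch of the current test, assuming the whole mesh has already been
// read into a single MultipolygonShape (meshMultipolygon is a placeholder).
InMemoryFeatureLayer meshLayer = new InMemoryFeatureLayer();

// One huge feature holding > 200,000 polygons -- this is the slow, memory-hungry step.
meshLayer.InternalFeatures.Add(new Feature(meshMultipolygon));
```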
What would be the best way to approach this? I believe spatially grouping the data into several features and adding them one by one to the InternalFeatures collection of the InMemoryFeatureLayer would help speed things up a bit, but the algorithm to group the data may take a long time as well. I have also thought of writing a customized FeatureSource class and controlling rendering speed by overriding the GetAllFeaturesInsideBoundingBoxCore() function, but that will not solve the problem when the bounding box is the full extent of the dataset (i.e. when the map requests the full extent of the data). A rough sketch of this idea is shown below.
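To make the second idea concrete, here is a rough, untested sketch combining the grouping step with the custom FeatureSource. The exact Map Suite namespace and the GetAllFeaturesCore/GetAllFeaturesInsideBoundingBoxCore signatures, as well as the MultipolygonShape.Polygons and Intersects() calls, are assumptions on my part and would need to be checked against the actual API; the cell size and the centre-of-bounding-box grouping rule are arbitrary choices. It pre-splits the mesh into one feature per grid cell and only returns the cells overlapping the requested extent, so it still does not fix the full-extent case, which I think would need a pre-simplified copy of the mesh.

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using ThinkGeo.MapSuite.Core;   // namespace assumed; may differ by Map Suite edition/version

// Sketch of the grouping + custom FeatureSource idea: the mesh is pre-split into one
// Feature per grid cell, and only cells overlapping the requested extent are returned.
public class GriddedMeshFeatureSource : FeatureSource
{
    private readonly List<RectangleShape> cellBoxes;   // extent of each cell feature
    private readonly List<Feature> cellFeatures;       // one Feature per grid cell

    public GriddedMeshFeatureSource(List<RectangleShape> boxes, List<Feature> features)
    {
        cellBoxes = boxes;
        cellFeatures = features;
    }

    // Build one Feature per grid cell from the full mesh (intended to run once, offline).
    // Each polygon goes into the cell containing the centre of its bounding box.
    public static GriddedMeshFeatureSource BuildFromMesh(MultipolygonShape mesh, double cellSize)
    {
        var cells = new Dictionary<string, MultipolygonShape>();
        foreach (PolygonShape polygon in mesh.Polygons)
        {
            PointShape center = polygon.GetBoundingBox().GetCenterPoint();
            string key = Math.Floor(center.X / cellSize) + "_" + Math.Floor(center.Y / cellSize);
            if (!cells.ContainsKey(key)) { cells[key] = new MultipolygonShape(); }
            cells[key].Polygons.Add(polygon);
        }

        var boxes = new List<RectangleShape>();
        var features = new List<Feature>();
        foreach (MultipolygonShape cell in cells.Values)
        {
            boxes.Add(cell.GetBoundingBox());
            features.Add(new Feature(cell));
        }
        return new GriddedMeshFeatureSource(boxes, features);
    }

    // Full-extent requests still return everything; a coarser, pre-simplified copy of the
    // mesh would be needed to keep those fast.
    protected override Collection<Feature> GetAllFeaturesCore(IEnumerable<string> returningColumnNames)
    {
        return new Collection<Feature>(cellFeatures);
    }

    // Only hand back the cells whose extent overlaps the requested bounding box.
    protected override Collection<Feature> GetAllFeaturesInsideBoundingBoxCore(
        RectangleShape boundingBox, IEnumerable<string> returningColumnNames)
    {
        var result = new Collection<Feature>();
        for (int i = 0; i < cellFeatures.Count; i++)
        {
            if (boundingBox.Intersects(cellBoxes[i]))
            {
                result.Add(cellFeatures[i]);
            }
        }
        return result;
    }
}
```

The grouping pass is still proportional to the number of polygons, but it would only run once when the dataset is prepared, not on every map request.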
So, does anyone have a good idea for tackling this?
Regards,
Veradej