I have a shapefile with about 30,000 records representing points of interest. There are 50 different types, and I am using a ValueStyle with a different icon for each type: an icon for bank, grocery, car rental, gas station, etc. Using a ValueStyle with 50 different types on a shapefile with 30,000 records has some performance issues. What would be your suggestions to increase performance in that scenario?
Displaying points of interest with a large number of types
Adolfo,
The more records a shapefile contains, the slower the performance, especially with styles such as ClassBreakStyle or ValueStyle. I have an idea that should improve your performance considerably, though it is a little complicated.
The best solution is to split the shapefile; what I mean is to create a separate shapefile for each value in the ValueStyle. For example, since you have 50 different types, you would end up with 50 shapefiles. Alternatively, you can group your 50 types into several groups and split by group. Either way, the shapefiles become smaller and there are fewer conditions to evaluate. A sketch of the splitting step follows.
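Here is a minimal offline preprocessing sketch of the split Howard describes, written in Python with the pyshp library rather than Map Suite. The input file name poi and the attribute column TYPE are assumptions; substitute your own.

```python
import shapefile  # pyshp

reader = shapefile.Reader("poi")                   # assumed input: poi.shp/.shx/.dbf
fields = reader.fields[1:]                         # skip the internal DeletionFlag field
type_index = [f[0] for f in fields].index("TYPE")  # assumed type column name

writers = {}
for sr in reader.iterShapeRecords():
    poi_type = sr.record[type_index]
    if poi_type not in writers:
        # One output shapefile per distinct type, e.g. poi_bank.shp
        w = shapefile.Writer(f"poi_{poi_type}", shapeType=reader.shapeType)
        for f in fields:
            w.field(*f)
        writers[poi_type] = w
    writers[poi_type].shape(sr.shape)
    writers[poi_type].record(*sr.record)

for w in writers.values():
    w.close()
```

Each per-type layer can then be drawn with a single point style, so no per-feature attribute lookup is needed at render time.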
I'm sure you'll get much higher performance with this solution. On the other hand, if your shapefiles are stable (they rarely change), server-side caching is another option. Please have a try and let me know your feedback.
Thanks,
Howard
Guys,
I wanted to clarify something that Howard said. It is not that a larger shapefile by itself means slower performance; it is that rendering based on a column value slows things down. The reason is that after we do the spatial query and find all the points in the rectangle of your view, we then have to go to the DBF file and read column information as well. This extra reading from the disk costs performance. If you have 10 different point-of-interest types, then for maximum speed we suggest you split your shapefile into ten different ones, one per type. That way we do not need to read the point type from the DBF; we already know it from your layer setup. If you have a high point density on the screen, this can really speed things up.
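To make the DBF cost David describes concrete, here is a small timing sketch, again using pyshp as a stand-in. This is not Map Suite's internal query path, just an approximation of "geometry only" versus "geometry plus attributes"; the file name poi is an assumption.

```python
import time
import shapefile  # pyshp

reader = shapefile.Reader("poi")  # assumed input file

# Geometry only -- the per-type-file case, no DBF access per feature.
start = time.perf_counter()
shapes = list(reader.iterShapes())
print(f"shapes only:      {time.perf_counter() - start:.3f}s")

# Geometry plus attributes -- the single-file ValueStyle case,
# where every feature also requires a DBF read.
start = time.perf_counter()
pairs = list(reader.iterShapeRecords())
print(f"shapes + records: {time.perf_counter() - start:.3f}s")
```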
One other thing: a user once commented that the way we evaluate our value and class break renderers may not be as optimized as it could be. While I think the main slowdown is the disk reads, there could be some gains from writing your own value style. For example, if you know that 90% of the records are of one type, it would be better to check that type first. You can get that behavior with ours by adding that value first in the value style's value collection, so that we check the most common case first; a sketch of the idea follows.
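Here is a minimal, language-neutral sketch of that reordering idea in Python. The (value, style) pairs and the TYPE column are hypothetical stand-ins for a ValueStyle's value items, not Map Suite API.

```python
from collections import Counter

def order_value_items(value_items, records, column):
    """Reorder (value, style) pairs so the most frequent values come first.

    At draw time the items are scanned in order, so putting the most
    common value first makes the typical lookup succeed immediately.
    """
    counts = Counter(rec[column] for rec in records)  # missing keys count as 0
    return sorted(value_items, key=lambda item: counts[item[0]], reverse=True)

# Hypothetical example: 90% of the features are gas stations.
value_items = [("bank", "bank.png"), ("grocery", "grocery.png"),
               ("gas station", "gas.png")]
records = [{"TYPE": "gas station"}] * 9 + [{"TYPE": "bank"}]
print(order_value_items(value_items, records, "TYPE"))
# -> [('gas station', 'gas.png'), ('bank', 'bank.png'), ('grocery', 'grocery.png')]
```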
David