
Tap on feature to display popup of info on feature

Afternoon forum and thank you in advance for any input you have.

I have a Xamarin Forms solution and am looking to migrate from the GoogleMaps SDK to ThinkGeo.
The move is to give users a much more powerful mapping experience, which I believe ThinkGeo certainly offers, but there are some things the GoogleMaps SDK does very well, and that’s the subject of this question.

With GoogleMaps, when the user taps on a feature, the tapped feature is passed to your tapped event.
A very handy thing … to be handed the feature that was tapped.
I can then display a popup with some summary info on the feature.

It appears ThinkGeo does not support this in such a simple and direct way. Please correct me if I am wrong.
Based on the HowDoISamples, it looks like I should use:
/Samples/UsingOverlays/UsingPopupsSample.xaml.cs for the popups, and
/Samples/UsingQueryTools/GetFeaturesWithinDistanceSample.xaml.cs to find which features are near to/within the tapped position …

Is this the correct/best/preferred way to find the feature under where the user tapped?

If so I have some questions to clarify how to do this best.
What I currently have is the following:

  1. I subscribe to the MapSingleTap event on the MapView.
  2. Using the PointShape I get from the event, I call the FeatureLayer.QueryTools.GetFeaturesWithinDistanceOf method to obtain a list of features near the PointShape (that is, where the user tapped).
  3. From the list of features returned, I assume the first is the closest, and I create and show a popup based on that feature (a rough sketch follows this list).
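
Roughly, what that looks like in code (a rough sketch only; I’m assuming the same usings as the HowDoISamples, a meter-based map unit, that TouchMapViewEventArgs exposes the tapped world coordinate as PointInWorldCoordinate, and a ShowPopup helper that stands in for the UsingPopupsSample popup code):

    // Subscribe once, e.g. in the page constructor after the MapView is set up:
    mapView.MapSingleTap += OnMapSingleTap;

    private void OnMapSingleTap(object sender, TouchMapViewEventArgs e)
    {
        // Assumption: the tapped position in world (map) coordinates.
        PointShape tappedPoint = e.PointInWorldCoordinate;

        // poiLayer is a placeholder for one of my FeatureLayers.
        poiLayer.Open();
        // Find features within a "fat finger" tolerance of the tap
        // (50 meters here, assuming the map/data are in meters).
        Collection<Feature> nearbyFeatures = poiLayer.QueryTools.GetFeaturesWithinDistanceOf(
            tappedPoint, GeographyUnit.Meter, DistanceUnit.Meter, 50, ReturningColumnsType.AllColumns);
        poiLayer.Close();

        if (nearbyFeatures.Count > 0)
        {
            // Note: this collection is not necessarily ordered by distance.
            ShowPopup(nearbyFeatures[0], tappedPoint);   // hypothetical popup helper
        }
    }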

Is this the best/preferred way to do this?

If it is, then I have some further questions:

  1. The GetFeaturesWithinDistanceOf method returns a list of features. What is the order of that returned set? The docs do not provide that detail. Can I assume the first is the closest? How do I get the feature closest to the PointShape from that set?
  2. From a user-experience point of view, and thinking of a fat-fingered user … should I handle the fact that the user may be tapping on the image/icon of the feature, which is quite possibly offset from its actual location? Any ideas or recommendations here?
  3. What happens if I have a ClusterPointStyle? Do I simply get all the features that make up the cluster, or do I get some form of clustered representation of them? That is, do the features in the returned list somehow identify the cluster they belong to?

I expect to have quite a few feature layers (possibly 5) as a logical grouping of my features (likely numbering in the hundreds). I expect to have this many layers to assist with styling that volume of features and to allow users to hide features on a per-layer basis.
Since GetFeaturesWithinDistanceOf is a feature-layer method, I could either:

  1. Simply iterate over all my layers, running the query on each and building a list of possible features, OR
  2. Keep a single InMemoryFeatureLayer containing all the features, hidden from view, and query that one layer.

Which of the two would be more costly: iterating and opening/closing each layer, or the overhead of a single hidden layer that duplicates the features?

I look forward to any help/suggestions you have to offer.

Cheers
Chris …

Morning.

I have done some testing, and I think I can answer some of my own questions, and of course I have a few more.

The GetFeaturesWithinDistanceOf method on a FeatureLayer does not apply any useful/specific order to the returned set of features. It could, for example, order them by their distance from the targetShape passed in. If there is any consistent order, my guess is that it is based on the order in which the features were originally added to the source layer.

As my app targets mobile devices, I need to allow for fat fingers, so I will use the GetFeaturesWithinDistanceOf method with an acceptable “within” distance to make a user’s life easy.

Using this method, I iterate over all my layers, query each one, and add the resulting features to a temporary InMemoryFeatureLayer.

Then I query that temp layer with GetFeaturesNearestTo, with a maxItemsToFind of 1, and it appears I am getting the expected result: the nearest feature to the tapped point within an acceptable fat-finger margin.
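
In code, roughly (a sketch only; featureLayers, the 50-meter tolerance, and the meter map unit are placeholders for my actual setup, using the same usings as the HowDoISamples):

    private Feature FindNearestFeature(IEnumerable<FeatureLayer> featureLayers, PointShape tappedPoint)
    {
        // Temp layer that collects the candidate features from every layer.
        InMemoryFeatureLayer candidateLayer = new InMemoryFeatureLayer();

        foreach (FeatureLayer layer in featureLayers)
        {
            layer.Open();
            // "Fat finger" tolerance: anything within 50 meters of the tap is a candidate.
            Collection<Feature> nearby = layer.QueryTools.GetFeaturesWithinDistanceOf(
                tappedPoint, GeographyUnit.Meter, DistanceUnit.Meter, 50, ReturningColumnsType.AllColumns);
            layer.Close();

            foreach (Feature feature in nearby)
            {
                candidateLayer.InternalFeatures.Add(feature);
            }
        }

        if (candidateLayer.InternalFeatures.Count == 0)
        {
            return null;
        }

        // Ask the temp layer for the single nearest candidate to the tap.
        candidateLayer.Open();
        Collection<Feature> nearest = candidateLayer.QueryTools.GetFeaturesNearestTo(
            tappedPoint, GeographyUnit.Meter, 1, ReturningColumnsType.AllColumns);
        candidateLayer.Close();

        return nearest.Count > 0 ? nearest[0] : null;
    }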

Is there a better or preferred way to handle this tap on feature and show popup with summary info?

Now onto my next discovery … this works fine when you have applied a simple style, as there is a 1-to-1 relationship between the symbol seen on the screen and the feature underneath it.
But if you have a ClusterPointStyle, I have an issue. My technique of looking for features near the tap is not aware of which features are potentially clustered.

If the user tapped on a cluster point, I need to know that so I can do something other than showing a popup, perhaps giving the user the option to go to another page showing the details of all the features within that cluster.

Can I know what features are in a cluster?

Maybe I am going about this the wrong way, and I should instead be looking at which style the user tapped on and finding which feature/s sit under that style?

Cheers
Chris …

What I currently have is the following:

  1. I subscribe to the MapSingleTap event on the MapView.
  2. Using the PointShape I get from the event, I call the FeatureLayer.QueryTools.GetFeaturesWithinDistanceOf method to obtain a list of features near the PointShape (that is, where the user tapped).
  3. From the list of features returned, I assume the first is the closest, and I create and show a popup based on that feature.

Is this the best/preferred way to do this?

This is the most straightforward way to do it, except that you will want to use FeatureLayer.QueryTools.GetFeaturesNearestTo, which orders the returned list of features by distance.

  1. The GetFeaturesWithinDistanceOf method returns a list of features. What is the order of that returned set? The docs do not provide that detail. Can I assume the first is the closest? How do I get the feature closest to the PointShape from that set?

GetFeaturesWithinDistanceOf returns the list of features in no particular order. You’ll want to use GetFeaturesNearestTo instead; with that method, index 0 is the closest feature.
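
For example (a quick sketch; GeographyUnit.Meter assumes a meter-based map unit, and 3 is just a sample maxItemsToFind):

    featureLayer.Open();
    // Nearest-first: index 0 is the closest feature to the tapped point.
    Collection<Feature> nearest = featureLayer.QueryTools.GetFeaturesNearestTo(
        tappedPoint, GeographyUnit.Meter, 3, ReturningColumnsType.AllColumns);
    featureLayer.Close();

    Feature closest = nearest.Count > 0 ? nearest[0] : null;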

  2. From a user-experience point of view, and thinking of a fat-fingered user … should I handle the fact that the user may be tapping on the image/icon of the feature, which is quite possibly offset from its actual location? Any ideas or recommendations here?

In the case of offset style icons, I’m not sure, but there might be a way to offset the search based on the offset of the style itself in the MapTouch event. I’d have to look further into how to do that, though.

  3. What happens if I have a ClusterPointStyle? Do I simply get all the features that make up the cluster, or do I get some form of clustered representation of them? That is, do the features in the returned list somehow identify the cluster they belong to?

Querying for features is separate from what is actually drawn on the map. So, in the case of cluster point styles, you would end up just getting the closest single feature from the underlying data. There might be a way to subclass the ClusterPointStyle class and somehow interrogate the cluster grid cell the user taps on to see which features it would cluster together.
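
One workaround you could try with just the existing query tools (a sketch only, not a confirmed API; the cell size, ClusterDetailPage, and ShowPopup are placeholders you would supply): treat the tap as a cluster tap whenever more than one underlying feature falls within roughly one cluster cell of the tapped point, and branch to a detail page instead of a popup.

    // Inside the ContentPage that hosts the MapView.
    private async Task HandleTapAsync(FeatureLayer layer, PointShape tappedPoint)
    {
        // Assumption: tune this to your ClusterPointStyle settings and current zoom level.
        double clusterCellSizeInMeters = 200;

        layer.Open();
        Collection<Feature> candidates = layer.QueryTools.GetFeaturesWithinDistanceOf(
            tappedPoint, GeographyUnit.Meter, DistanceUnit.Meter, clusterCellSizeInMeters,
            ReturningColumnsType.AllColumns);
        layer.Close();

        if (candidates.Count > 1)
        {
            // Several underlying features near the tap: treat it as a cluster tap.
            await Navigation.PushAsync(new ClusterDetailPage(candidates));   // hypothetical detail page
        }
        else if (candidates.Count == 1)
        {
            ShowPopup(candidates[0], tappedPoint);   // hypothetical popup helper
        }
    }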

I expect to have quite a few feature layers (possibly 5) as a logical grouping of my features (likely numbering in the hundreds). I expect to have this many layers to assist with styling that volume of features and to allow users to hide features on a per-layer basis.
Since GetFeaturesWithinDistanceOf is a feature-layer method, I could either:

  1. Simply iterate over all my layers, running the query on each and building a list of possible features, OR
  2. Keep a single InMemoryFeatureLayer containing all the features, hidden from view, and query that one layer.

Which of the two would be more costly: iterating and opening/closing each layer, or the overhead of a single hidden layer that duplicates the features?

It depends on how complex your data is. If the five layers are just point data that are basically POIs for the user to inspect, or if the data is retrieved through a database that would take time to query, then solution #2 might be most efficient, with the downside of the upfront memory footprint of holding all the data. If your data is local, large polygon data that is easy to query (like shapefiles), then solution #1 would be best, because you don’t want complex geometries just sitting in memory all the time.
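
If you go with #2, the upfront build might look roughly like this (a sketch; featureLayers stands in for your five layers, and the combined layer is never added to an overlay, it exists only for querying):

    // Build once at startup: copy every feature from the visible layers into a single
    // in-memory layer used only for hit-testing, never drawn.
    InMemoryFeatureLayer hitTestLayer = new InMemoryFeatureLayer();

    foreach (FeatureLayer sourceLayer in featureLayers)
    {
        sourceLayer.Open();
        Collection<Feature> allFeatures = sourceLayer.QueryTools.GetAllFeatures(ReturningColumnsType.AllColumns);
        sourceLayer.Close();

        foreach (Feature feature in allFeatures)
        {
            hitTestLayer.InternalFeatures.Add(feature);
        }
    }

    // Later, on each tap:
    hitTestLayer.Open();
    Collection<Feature> nearest = hitTestLayer.QueryTools.GetFeaturesNearestTo(
        tappedPoint, GeographyUnit.Meter, 1, ReturningColumnsType.AllColumns);
    hitTestLayer.Close();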

Good Morning.

I would like to push further into this topic. This could be a feature request, a request for further information, or a request for some knowledgeable advice on how to go forward. Maybe even a little of all three.

When the user taps on the UI map surface, the MapSingleTap event is fired.
This gives you the sender, which is the map control, and TouchMapViewEventArgs, which contains the map coordinates of where the user tapped.

As you can see from my comments above, using the HowDoISamples and a mix of GetDataFromFeatureSample and GetFeaturesWithinDistanceSample, I can get something working.
The issue is that this is a fairly simple scenario, and in reality the UI the user is seeing does not always have a perfect 1-to-1 match with the underlying feature layers.
What do I mean by that? As soon as you start using the custom styles, you are changing the relationship between the features and what is displayed. That is their design, I think.

So let’s take the ClusterPointStyle. Its job is to take many features and roll them up into a single point on the UI. Using the technique above, I cannot get accurate information: the point where the ClusterPointStyle sits has no features under it … that is by design, as the ClusterPointStyle moves to a central position relative to the features it is clustering.

What I think would be a great solution is for the MapSingleTap event to additionally return the Custom Style and a collection of the features under the tap point.

Is this possible?

Is this a feature you would want to put into the product … an enhancement request?

Is this something I can implement by extending the classes involved? Are you able to provide some direction, advice and samples on this?

Looking forward to your assistance.
Regards
Chris …

Good morning.

Any takers on this topic?
It is a fairly important concept to a mobile app using a map.
Mobile users pan and zoom in/out a lot and expect the data on screen to quickly shift from a summarized view to a detail view.
They also expect to tap and see detail on features.

This is something that the GMaps SDK does very well.

Does ThinkGeo do this already and I am missing where that is?
Is it very easy to extend ThinkGeo to do this?
Or am I simply out of luck in that space?

Hoping someone can give me some direction.

Thanks
Chris …