Production Scheduling Tips & Tricks for Microsoft Dynamics 365 Business Central

NAV 2015 Tablet Client: VPS Web Client Touch Enhancements

Written by Dieter Temme | Mar 5, 2015 12:22:00 PM

In previous blog entries we have already written about our experiences developing JavaScript add-ins for Microsoft Dynamics NAV 2013 R2 and 2015 that can be used both in the NAV Web Client and in the Windows Client.

With NAV 2015, Microsoft introduced a new app, internally called the Tablet Client, which is available for several platforms, namely Microsoft Windows 8.1, Google Android and Apple iOS. This Tablet Client is basically a web browser container hosting NAV pages. The pages can host embedded control add-ins written in HTML and JavaScript, just like the Web Client of NAV. All of these platforms require the user interface content to be prepared for touch gestures.

Two issues are crucial in this context:

  • Look-and-Feel: All interactive elements on the screen have to be large enough and positioned at a sufficient distance from each other so that they can be hit accurately by a finger without ambiguity.
  • Interactions: All interactive elements on the screen have to support both mouse and touch gestures, and these interactions have to be intuitive for the end user.

In order to enhance the current version of our Visual Production Scheduler (VPS) Web Client for touch devices, we decided to concentrate on the gestures first before adapting the look and feel. Read more in this blog post about the touch enhancements in the VPS Web Client.

Main Challenge

For simple web pages, the web browsers on all platforms simulate mouse gestures such as clicking and scrolling without requiring any changes to the pages' code. But a web application that wants to gain control over all mouse and touch input needs to handle that input itself.

 

The Touch Gestures

Here are the basic touch gestures usually expected of a tablet application:

  • Single Tap: This gesture is nearly the same as a mouse left button click but could have different semantics. While there is no touch interaction corresponding to mouse hovering, a tap gesture could be used e.g. to show a tooltip. It is also possible to define multi-touch taps, but currently we do not use such gestures.
  • Double tap: This gesture is nearly the same as a mouse left button double click. However, you can only define a double tap (as well as a double click) if the first tap does not already cause changes in the chart, such as the tapped DOM element being moved or deleted.
  • Press (sometimes called Hold): This gesture means holding a DOM element down with one finger for some time without moving. The semantics could be to emulate a mouse right button click, but you can use this gesture otherwise if necessary (see below).
  • Pan: This could be the same as a mouse dragging gesture, but one should bear in mind that dragging a DOM element in the foreground has to be distinguished from scrolling the whole chart content while only part of it is shown in the view. That's why it could be necessary to allow dragging an object only after holding it down with the finger for a while (see press gesture). Additionally, the dragged object or the scrolled chart should follow the finger position exactly and without delay.
  • Swipe (sometimes called Flick): This gesture is often used for rapid scrolling of window content, with the scrolling continuing briefly after the finger is lifted and the velocity gradually decreasing. This leaves application developers with the task of distinguishing this gesture from the pan gesture. The equivalent mouse gesture could be using the mouse wheel.
  • Rotate: This gesture is a two finger touch gesture. There is no equivalent mouse gesture. It is not used in our control add-in at the moment.
  • Pinch: This is also a two finger touch gesture and it is often used for zooming in or out. The equivalent mouse gesture could be the usage of the mouse wheel (possibly only while the CTRL key is pressed as well in order to distinguish it from scrolling). In our control add-in, this gesture is implemented for stretching or shrinking the time resolution when performed on the timescale.

It is also possible to combine touch gestures by using more than one finger or by requiring two gestures in sequence such as ‘Press’ and ‘Pan’.

The possible mouse gestures should be familiar to most people by now, so we refrain from explaining them here: left/first button click, right/second button click, left button double click, drag, hover, and wheel action.

Searching a Library for Handling Touch Events

All web browsers simulate mouse events for taps and swipes. When it comes to distinguishing mouse from touch gestures in a web application, these simulated mouse events are not welcome, so we have to try to switch them off. This requires browser-dependent steps since the World Wide Web Consortium (W3C) has not yet fully standardized touch behavior. Unfortunately, the appropriate CSS property ‘touch-action’ is not supported by Safari in iOS 8.1 at the moment. In addition, there are two sets of touch events: Internet Explorer implements unified pointer events for mouse and touch input, whereas Chrome and Safari implement pure touch events. Firefox implements both sets, but has disabled touch support since version 25.
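To give an idea of what such browser-dependent steps can look like, here is a minimal sketch (simplified, not our production code; ‘chartRoot’ is only an assumed element id): the ‘touch-action’ property addresses Internet Explorer's pointer event model, while calling preventDefault() in a ‘touchstart’ listener suppresses the simulated mouse events in browsers with pure touch events.

// Minimal sketch: switch off default touch handling so that the application
// receives raw input instead of simulated mouse events ('chartRoot' is an assumed id).
var chartRoot = document.getElementById('chartRoot');

// IE 10/11 use pointer events; 'touch-action: none' tells them not to pan or zoom by default.
chartRoot.style.msTouchAction = 'none';
chartRoot.style.touchAction = 'none'; // currently ignored by Safari on iOS 8.1

// Chrome and Safari fire pure touch events; preventDefault() in 'touchstart'
// suppresses the mouse events (mousedown, mouseup, click) that would otherwise
// be simulated after the touch sequence.
chartRoot.addEventListener('touchstart', function (ev) {
    ev.preventDefault();
}, false);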

We began searching for a JavaScript library which could help in terms of distinguishing between mouse and touch events and could handle the browser-dependent differences itself so that we could focus on implementing the actions for the gestures. The JavaScript community is very large and produces many plug-ins and libraries so that we were sure to find something (e.g. see http://www.queness.com/post/11755/11-multi-touch-and-touch-events-javascript-libraries).

Since browser support changes with every version of each browser and the W3C standardization keeps evolving, a library needs to be maintained and, if necessary, updated quickly after new versions of Internet Explorer, Chrome, Firefox, and Safari are released. Moreover, the library should handle all known touch gestures, because the browsers themselves only trigger simple touch start, move, and end events and hence do not trigger individual events for multi-touch gestures such as pinching and rotating.
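To illustrate why this matters, the following simplified sketch (not taken from any library) shows what an application would have to do on its own to derive a pinch factor from the raw touch events; ‘timescaleElement’ is only an assumed element:

// Sketch: deriving a pinch scale factor from raw 'touchstart'/'touchmove' events.
var timescaleElement = document.getElementById('timescale'); // assumed id

var startDistance = 0;

function fingerDistance(touches) {
    var dx = touches[0].clientX - touches[1].clientX;
    var dy = touches[0].clientY - touches[1].clientY;
    return Math.sqrt(dx * dx + dy * dy);
}

timescaleElement.addEventListener('touchstart', function (ev) {
    if (ev.touches.length === 2) {
        startDistance = fingerDistance(ev.touches);
    }
});

timescaleElement.addEventListener('touchmove', function (ev) {
    if (ev.touches.length === 2 && startDistance > 0) {
        var scale = fingerDistance(ev.touches) / startDistance;
        // scale > 1 means the fingers moved apart, scale < 1 means they moved together;
        // a library such as Hammer.js reports this as a ready-made 'pinch' event instead.
    }
});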

We were surprised that only one JavaScript library really met all our criteria: Hammer.js (see http://hammerjs.github.io/) is a library that:

  • is well maintained,
  • is tested with all relevant web browsers on the main platforms,
  • can handle nearly all mouse and touch events (it only lacks support for swiping with momentum, see below),
  • triggers unified, gesture-specific events instead of the browser ones.

So we began implementing our interactions using Hammer.js, which can be done in a very direct and easy way, as the following small code example shows:

// define the DOM element which should get Hammer events
var myElement = document.getElementById('myElement');

// create a Hammer.Manager object
var mc = new Hammer.Manager(myElement);

// create Hammer.Recognizer objects for touch gestures…
var doubleTap = new Hammer.Tap({ event: 'doubletap', taps: 2 });
var press = new Hammer.Press();

// … and add them to the Manager
mc.add([doubleTap, press]);

// attach event listeners
mc.on('doubletap', function (ev) { /*...*/ });
mc.on('press', function (ev) { /*...*/ });

The Performance Shock and a Refined Approach

Initially, we did not notice that our application was getting slower and slower, especially in Internet Explorer, as the implementation of the touch event actions grew. Inside the NAV Windows Client, Internet Explorer is actually used in the form of a browser control to show the JavaScript control add-ins, so IE was essential for our control add-in: loading a larger amount of data now took 5 minutes instead of the 10 seconds it had taken before we used Hammer.js. This came as a great shock!

After doing some profiling, we found the culprit: Internet Explorer has a performance weakness when it comes to adding event listeners. Since we had added Hammer.Recognizer objects for every table cell and for every operation in the chart, and each Hammer.Recognizer itself adds several mouse and touch event listeners to the assigned DOM element, we had to look for a different solution.

We found it in a refined approach: We now create only a single Hammer.Manager object and assign it to the SVG root element. Then we add one appropriate Hammer.Recognizer object for each gesture type needed in the application. As a result, the number of event listeners stays constant regardless of the number of table rows and operations.

Each DOM element that should really process Hammer events has to get two things:

  • Self-defined class names are added that indicate types of touch gestures.
  • An event listener is added for handling one self-defined custom event.

Finally, we implemented a new object called EventDispatcher, which listens to all events triggered by the single Hammer.Manager object, determines the actually targeted DOM element, and triggers the custom event on it, with the ‘detail’ attribute of the custom event object holding the original Hammer event object.
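A condensed sketch of this delegation idea looks roughly as follows (class names, element ids, and the event name prefix are illustrative only; in real code the single and double tap recognizers would typically also be coupled via recognizeWith/requireFailure):

// Sketch of the delegation approach: one Hammer.Manager on the SVG root,
// one recognizer per gesture type, and an EventDispatcher-like handler
// that re-triggers the gesture as a custom event on the touched element.
var svgRoot = document.getElementById('svgRoot'); // assumed id of the SVG root element
var mc = new Hammer.Manager(svgRoot);
mc.add([
    new Hammer.Tap(),
    new Hammer.Tap({ event: 'doubletap', taps: 2 }),
    new Hammer.Press(),
    new Hammer.Pan()
]);

// Walk up from the touched element to the first ancestor that opted in
// to the gesture via a self-defined class name such as 'accepts-press'.
function findGestureTarget(start, className) {
    var node = start;
    while (node && node !== document) {
        var cls = node.getAttribute && node.getAttribute('class');
        if (cls && (' ' + cls + ' ').indexOf(' ' + className + ' ') >= 0) {
            return node;
        }
        node = node.parentNode;
    }
    return null;
}

mc.on('tap doubletap press pan panend', function (ev) {
    var target = findGestureTarget(ev.target, 'accepts-' + ev.type);
    if (!target) {
        return;
    }
    // Re-trigger the gesture as a custom event; the original Hammer event
    // travels along in the 'detail' attribute (createEvent/initCustomEvent
    // is used here because IE 11 lacks the CustomEvent constructor).
    var custom = document.createEvent('CustomEvent');
    custom.initCustomEvent('vps' + ev.type, true, true, ev);
    target.dispatchEvent(custom);
});

The listener registered on each interactive element then simply reacts to ‘vpstap’, ‘vpspress’ and so on, and reads the original Hammer data from event.detail.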

In addition, the refined approach now enables us to implement supplemental behaviors at a central point:

  • We handle the ‘swipe’ touch gesture by adding a recurrent timer in order to keep scrolling the chart after the end of the touch gesture (a sketch of this follows the list below). We decided to use the Hammer.Pan recognizer instead of the Hammer.Swipe recognizer, because the latter does not provide a ‘swipeend’ event.
  • We avoid the so-called ghost click problem, which is unfortunately not handled inside Hammer.js, by suppressing the second mouse click event that follows a touch tap event. For an explanation of the issue see the end of this page: http://hammerjs.github.io/tips/. However, the solution provided there would result in even more event listeners, which would be counterproductive.
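The momentum scrolling mentioned in the first point could, in a very reduced form, look like the following sketch; scrollChartBy() is only a placeholder for our internal scrolling routine, the friction and threshold values are assumptions, and ‘mc’ is the Hammer.Manager from the sketch above:

// Sketch: keep scrolling with decreasing velocity after a fast pan has ended.
var FRICTION = 0.95;      // assumed decay factor per timer tick
var MIN_VELOCITY = 0.05;  // assumed threshold (pixels per millisecond)
var momentumTimer = null;

mc.on('panend', function (ev) {
    var vx = ev.velocityX; // velocity reported by Hammer.js when the finger is lifted
    var vy = ev.velocityY;
    clearInterval(momentumTimer);
    momentumTimer = setInterval(function () {
        vx *= FRICTION;
        vy *= FRICTION;
        if (Math.abs(vx) < MIN_VELOCITY && Math.abs(vy) < MIN_VELOCITY) {
            clearInterval(momentumTimer);
            return;
        }
        scrollChartBy(vx * 16, vy * 16); // placeholder: scroll by the distance covered in one 16 ms tick
    }, 16);
});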

Further Considerations for Web Applications

Within our control add-in we also distinguish between mouse and touch events in cases that require a different behavior: 

  • Dragging an operation by mouse can be done immediately after pressing the left mouse button. To do the same with a finger, we require the ‘press’ touch gesture to be carried out before dragging, in order to differentiate dragging the operation from scrolling the chart.
  • Since hovering can't be done with a finger, we searched for a way to show tooltips. Safari on iOS and Chrome implement the following default behavior: When a hover style is defined in CSS or a ‘mouseover’ event listener is added to a DOM element, the first tap simulates hovering, whereas the second tap then simulates a mouse click. This increases the compatibility of tablets with ordinary web sites, as opposed to web applications. Since we try to avoid the browser's mouse simulation, we had to implement a similar feature ourselves where appropriate (see the sketch after this list).
  • Tap events did not trigger on small DOM elements like the circles for showing/hiding the capacity curve and the arrows for expanding/collapsing groups. So we had to enlarge the hittable rectangles for touch tap events only, not for mouse click events.
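As an example of such a distinction, the Hammer event object carries a ‘pointerType’ attribute (‘mouse’ or ‘touch’), so a single handler can branch on it; showTooltip() and selectOperation() are only placeholders here, and ‘mc’ is again the Hammer.Manager from the sketch above:

// Sketch: reacting differently to mouse and touch input for the same gesture.
mc.on('tap', function (ev) {
    if (ev.pointerType === 'touch') {
        // No hovering with a finger: a single tap shows the tooltip instead.
        showTooltip(ev.target, ev.center.x, ev.center.y); // placeholder routine
    } else {
        // Mouse users already get tooltips via hovering, so a click selects.
        selectOperation(ev.target); // placeholder routine
    }
});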

Web Browser and Platform Specifics

Although it seems to be a brave new world where the handling of touch events is concerned, weird issues came up when using different web browsers. We found out that it is almost always a bad idea to implement conditional code and styles depending on browser, platform, or device characteristics. Often it is better to find a combination of HTML, JavaScript, and CSS that works for all devices. This may take time, but it is worth it.

In order to debug problems that only occur within the Tablet Client of NAV, you will have to use the ‘tablet.aspx’ web page for NAV developers instead, because published apps, for instance on the iPad, cannot be debugged. Caution: a web application running inside Safari on the iPad can only be debugged when the iPad's USB cable is plugged into a Mac.

For the Web Client of NAV we also had to debug on a Windows machine with a touch monitor and several web browsers. We found out that Firefox has given up on supporting touch events for the time being, and that Internet Explorer 11 does not report the correct number of fingers the monitor can recognize at once (navigator.maxTouchPoints), while Chrome gets it right. So it is critical to detect the availability of a touch device on the Windows platform (see http://stackoverflow.com/questions/4817029/whats-the-best-way-to-detect-a-touch-screen-device-using-javascript). Windows is also the only operating system providing mouse and touch functionality simultaneously. The user can use both input devices, which makes it tricky for an application to implement intuitive interactions for both types of input.
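A touch detection along the lines of the Stack Overflow discussion mentioned above could look like this simplified sketch:

// Sketch: detect whether a touch screen is available on the current device.
function hasTouchSupport() {
    return ('ontouchstart' in window) ||     // pure touch event model (Chrome, Safari)
           (navigator.maxTouchPoints > 0) || // pointer event model (IE 11); the value itself may be unreliable
           (navigator.msMaxTouchPoints > 0); // prefixed variant of older IE versions
}

if (hasTouchSupport()) {
    // register the touch recognizers, enlarge hit areas, etc.
}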

Conclusion

Handling touch events is not as easy as some web sites proclaim, especially since we are developing a full-blown application that, in theory, should run platform-independently. Each touch gesture has to be handled with an awareness of conflicts with other gestures and/or other visual elements in its neighborhood. The web browsers trigger touch events differently and simulate mouse events, which can have negative side effects. There's no guarantee that a behavior keeps working the same way after a web browser update. We found it helpful to use a JavaScript library like Hammer.js in order to abstract from browser capabilities. In the end, our control add-in gained intuitive touch interactions in addition to the usual mouse interactions. You can see the result of our efforts when using the current version 1.1 of the VPS Web Client for Microsoft Dynamics NAV. We will continue to provide the best possible user experience for our customers.

Your next steps

Are you interested in learning more about the Visual Production Scheduler Web Client? Request your demo now!

Overview of this blog series about lessons learned with the development of a JavaScript add-in for Microsoft Dynamics NAV: