Mobile Application Testing (Functionality and Performance) with a JUnit TestSuite


Self Advertisement
----- Start of Advertisement -----
BUILD CAR POOL SOLUTIONS ON ANY DEVICE TO RUN ANYWHERE (www.mcruiseon.com). Introducing mCruiseOn, the Java library / JSON APIs that you can use to build a car pool solution. Be the next avego.com, carpooling.com, zimride.com. mCruiseOn is your one-stop API on EC2.
----- End of Advertisement -----

The world of mobile applications is entering a new era, where large databases are provisioned via web services to enable increased productivity through mobility. These applications follow a two-tier approach: a UI layer and a web services client layer.

It is becoming common practice to use web services and still provide a seamless experience in the mobile application. Various techniques to fetch, pre-fetch, cache and lazy-load are used to deliver that seamless experience. Teams working across the world are also increasingly common: the web services team is generally closer to the business, and to business data that is proprietary and heavily guarded, while the client teams sit in offshore centers and their development is provisioned via VPNs and API stubs (dead data, until the web service is developed).

Projects, too, tend to follow a common pattern. The app team starts with the UI and writes its API calls against dead-data stubs. This happens in parallel with the web services team developing its APIs. Changes are communicated to the app team (hopefully such agreements have been reached), and the app team develops its UI alongside the dead data on the API stubs.

Typically the test team is integrated by now and is testing the UI (with the dead data). But it is interesting to note that the test team can contribute big time by helping eliminate a future threat to its own productivity, and the same groundwork can later be reused to test the web services for performance.

Think about it: what if the test team could call a web service API, check its return values and verify that the API returns what it is expected to, in the right place in the response? The question that comes to your mind now is, how does testing the web service and the location of its data help the team? Well, let me explain.

A web service is rarely developed for one particular application. It is more commonly developed for a suite of applications across multiple domains. Every web service changes constantly under pressure from those various applications, so it is in the service team's best interest to move the location of data around to suit the needs of all of them. One change to the location of data, and it has side effects on every other team.

The question is, why doesn't the web services team act proactively and communicate these changes to all its clients in advance? Well, most of it is political. They communicate to the teams they want to; they don't communicate to the teams they don't.

The side effect: a client app is impacted, and the developer (oblivious to the fact that the damn API changed) debugs his own code and takes 3-4 hours to figure out it was an API change.

Now, what if there were a way to identify this change before that developer even sits down to debug?

The answer is simple: JUnit.

Let's collect our tools:

  • Eclipse
  • Fiddler (download it, install it)
  • API documentation
  • Split the APIs into two groups:
    • a) those that don't need a login
    • b) those that need a login
  • Call an API that does not need a login
  • Fire it through Fiddler and copy the JSON response (assuming JSON)
  • Go to http://jsonschema2pojo.org/ (project page: http://code.google.com/p/jsonschema2pojo/)
  • Paste the raw JSON response and hit "jar", then download the jar file
  • All the files containing your POJOs are now ready for use (see the sketch after this list)
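
To make the jsonschema2pojo step concrete, here is a minimal sketch. The JSON below is a made-up response (the real one comes out of Fiddler), and the class and field names (SpecialBooks, BookInformation, Size) are assumptions chosen to line up with the POJO used in the test further down; the actual generated code depends on your response and will carry Jackson annotations and more boilerplate.

	// Hypothetical JSON captured in Fiddler (invented data):
	// {
	//   "SpecialBooks": {
	//     "BookInformation": [
	//       { "Title": "Book A", "Size": "12 MB" },
	//       { "Title": "Book B", "Size": "7 MB" }
	//     ]
	//   }
	// }

	// The generated classes are roughly equivalent to hand-writing the following
	// (each public class would live in its own .java file).
	import java.util.List;

	public class BookInformation {
		private String title;
		private String size;
		public String getTitle() { return title; }
		public void setTitle(String title) { this.title = title; }
		public String getSize() { return size; }
		public void setSize(String size) { this.size = size; }
	}

	public class SpecialBooks {
		private List<BookInformation> bookInformation;
		public List<BookInformation> getBookInformation() { return bookInformation; }
		public void setBookInformation(List<BookInformation> bookInformation) { this.bookInformation = bookInformation; }
	}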

Now you need to write your Hello World.

File -> New -> JUnit Test Case, and in your @Test method write:

		// Assumed imports: java.net.URL, java.net.HttpURLConnection, java.io.IOException,
		// java.util.Iterator, Jackson's ObjectMapper / JsonNode / JsonMappingException /
		// JsonGenerationException, and static org.junit.Assert.*.
		JsonNode pubInfo = null;
		Iterator<JsonNode> pubDetails;
		HttpURLConnection connection;
		ObjectMapper mapper = null;
		JsonNode responseJson = null;
		try {
			URL url = new URL("Your URL Here");
			connection = (HttpURLConnection) url.openConnection();
			connection.setDoInput(true);
			connection.setDoOutput(true);
			connection.setRequestMethod("POST");
			connection.setRequestProperty("Content-Type", "application/json");
			// ...and any other configuration you want: timeouts etc.
			// Send the JSON request body.
			SpecialBooks request = new SpecialBooks(); // generated POJO with getters or public fields
			mapper = new ObjectMapper();
			mapper.writeValue(connection.getOutputStream(), request);
			// Read the response into a JSON tree.
			responseJson = mapper.readTree(connection.getInputStream());
		} catch (JsonGenerationException e) {
			e.printStackTrace();
			fail();
		} catch (JsonMappingException e) {
			e.printStackTrace();
			fail();
		} catch (IOException e) {
			e.printStackTrace();
			fail();
		}
		// These assertions pin down the expected location of the data:
		// if the web service moves things around, this test fails first.
		assertNotNull(responseJson);
		JsonNode pubs = responseJson.get("SpecialBooks");
		assertNotNull(pubs);
		pubInfo = pubs.get("BookInformation");
		assertNotNull(pubInfo);
		pubDetails = pubInfo.elements();
		assertNotNull(pubDetails);
		assertTrue(pubDetails.hasNext());
		// assertTrue, not assertNotNull: contains() returns a boolean.
		assertTrue(pubDetails.next().get("Size").toString().contains(" MB"));
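
Once this works, the same plumbing can double as a crude performance check, which is where the test team's groundwork pays off again for web service performance. A minimal sketch, assuming JUnit 4 and that the request/parse code above has been factored into a hypothetical helper called callSpecialBooks():

	// Fail the test if the web service call takes longer than an assumed budget
	// of 2 seconds. callSpecialBooks() is a hypothetical helper wrapping the
	// request and JSON parsing from the test above.
	@Test(timeout = 2000)
	public void specialBooksRespondsWithinBudget() throws Exception {
		JsonNode responseJson = callSpecialBooks();
		assertNotNull(responseJson.get("SpecialBooks"));
	}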

objectInputStream = new ObjectInputStream(new BufferedInputStream(inputStream)) hangs, also readObject hangs


Problem 1: Hangs at

objectInputStream = new ObjectInputStream(new BufferedInputStream(inputStream))

I was trying to figure out why one of my responses from the server was getting delayed by about 3-4 seconds. Some googling hinted that I should try buffered streams, so I modified my code as follows.
On the client side I have:

objectOutputStream = new ObjectOutputStream(new BufferedOutputStream(outputStream));
// objectOutputStream.flush() ;
objectInputStream = new ObjectInputStream(new BufferedInputStream(inputStream));

On the server side I have:

inputObjectStream = new ObjectInputStream(new BufferedInputStream(clientSocket.getInputStream()));
outputObjectStream = new ObjectOutputStream(new BufferedOutputStream(clientSocket.getOutputStream()));

I started to get hangs on the client side at

objectInputStream = new ObjectInputStream(new BufferedInputStream(inputStream));

The fix to this is two-fold:
1) Create your output streams before your input streams.
2) Flush your output stream before creating your input stream.

On the client side I have:

objectOutputStream = new ObjectOutputStream(new BufferedOutputStream(outputStream));
objectOutputStream.flush() ;
objectInputStream = new ObjectInputStream(new BufferedInputStream(inputStream));

On the server side I have:

outputObjectStream = new ObjectOutputStream(new BufferedOutputStream(clientSocket.getOutputStream()));
outputObjectStream.flush() ;
inputObjectStream = new ObjectInputStream(new BufferedInputStream(clientSocket.getInputStream()));
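
For completeness, here is a minimal, self-contained sketch of the ordering that avoids the hang. The host, port and the String payload are placeholders, not my actual protocol:

// A minimal sketch of the stream-creation order that avoids the constructor
// deadlock: each side creates and flushes its ObjectOutputStream before it
// creates its ObjectInputStream, so the ObjectInputStream constructor on the
// peer can read the serialization header it is blocking on.
import java.io.*;
import java.net.*;

public class OrderingSketch {
	static void client() throws IOException, ClassNotFoundException {
		try (Socket socket = new Socket("localhost", 9999)) {
			ObjectOutputStream out = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
			out.flush(); // push the stream header out before blocking on input
			ObjectInputStream in = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
			out.writeObject("ping");
			out.flush();
			System.out.println(in.readObject());
		}
	}

	static void server() throws IOException, ClassNotFoundException {
		try (ServerSocket listener = new ServerSocket(9999);
				Socket clientSocket = listener.accept()) {
			ObjectOutputStream out = new ObjectOutputStream(new BufferedOutputStream(clientSocket.getOutputStream()));
			out.flush();
			ObjectInputStream in = new ObjectInputStream(new BufferedInputStream(clientSocket.getInputStream()));
			Object request = in.readObject();
			out.writeObject("pong: " + request);
			out.flush();
		}
	}
}

Run server() in one process and client() in another. Without the early flush on each side, both ObjectInputStream constructors block waiting for a header that is still sitting in the peer's buffer.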

Problem 2: Hangs at readObject

I realized that I was not flushing all my object streams after each writeObject call. Flushing after writeObject becomes essential once you start using buffered streams, since it is the only way the reader's readObject will know that nothing more is coming for now.
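
In code, the writing side of that fix is simply (the response object here is a placeholder):

// With buffered streams, flush after every writeObject so the bytes actually
// leave the buffer and the peer's readObject can return.
outputObjectStream.writeObject(response); // 'response' is a placeholder payload
outputObjectStream.flush();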

Edited
So the flushing and buffered IO streams helped performance big time. But I realized that readObject was still holding up the object; I compared the arrival time of the response using Wireshark (smart move, I must say :)). Now that I knew the packet was already waiting, I was able to narrow the root cause down to serialization. So I removed the List and ArrayList fields and replaced them with plain arrays, and wow, it worked like a charm. Performance improved big time: CPU utilization dropped from 100% to 25% (with 50 concurrent clients) and to 50% with 100 concurrent clients. This is fun.
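
To illustrate the kind of change that made the difference (the class and field names below are invented; the real classes are my own protocol objects):

// Before: collection-typed fields. Default serialization writes each element as
// a full object (wrapper types, handles and all), which is expensive.
class RideUpdateBefore implements java.io.Serializable {
	private static final long serialVersionUID = 1L;
	java.util.List<Double> latitudes = new java.util.ArrayList<Double>();
	java.util.List<Double> longitudes = new java.util.ArrayList<Double>();
}

// After: plain arrays of primitives, which serialize as a compact run of values.
class RideUpdateAfter implements java.io.Serializable {
	private static final long serialVersionUID = 1L;
	double[] latitudes;
	double[] longitudes;
}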

Two takeaways:
- Serialization is a killer, so stick to primitives as much as possible.
- Use buffered IO streams, and flush at the right places.

Someday I will try externalization too. Hopefully that will improve performance further :). I will report back.
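
For reference, a minimal sketch of what the Externalizable route might look like for the same kind of made-up DTO; whether it actually helps is exactly what remains to be measured:

// A sketch of Externalizable: the class writes and reads its own fields,
// skipping most of the default serialization machinery. Names are invented;
// the public no-arg constructor is required by the Externalizable contract.
public class RideUpdateExternal implements java.io.Externalizable {
	double[] latitudes;
	double[] longitudes;

	public RideUpdateExternal() { }

	@Override
	public void writeExternal(java.io.ObjectOutput out) throws java.io.IOException {
		out.writeObject(latitudes);
		out.writeObject(longitudes);
	}

	@Override
	public void readExternal(java.io.ObjectInput in) throws java.io.IOException, ClassNotFoundException {
		latitudes = (double[]) in.readObject();
		longitudes = (double[]) in.readObject();
	}
}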