Wednesday, 13 August 2025

A Guide to Prompt Engineering

Imagine you're trying to describe a complex idea to a new team member. The first time, you get a blank stare. The second time, you get a slightly better look. By the third time, after you've refined your explanation, they finally "get it." This is a lot like talking to an AI. Your first prompt might get you a response that's technically correct but completely misses your intent. Your second might be a little closer. But the third, carefully crafted prompt will get you exactly what you're looking for.

This process of refining your communication with an AI is called prompt engineering. It’s the essential skill for getting the most out of large language models (LLMs). In this guide, we'll break down what prompt engineering is, explain why it's so important for anyone working with AI, and provide actionable techniques, including a powerful template you can start using today to get better, more reliable results from your AI interactions.

What Is Prompt Engineering, Anyway?

Think of a large language model (LLM) like a super-smart, eager-to-please intern. It has access to an incredible amount of information, but it needs clear, precise instructions to do its job well.

Prompt engineering is simply the art of crafting these effective instructions. It's the skill of giving an AI model the right context, constraints and direction to get a high-quality, predictable output. It's the difference between a messy first draft and a polished, ready-to-go final product.

Why Bother Learning This?

You might be asking, "Why can't the AI just figure it out?" Good question. The truth is, these models are sophisticated pattern-matching machines. They predict the next word in a sequence based on probability. A vague prompt can be interpreted in a dozen different ways, leading to:

  • Irrelevant Answers: The AI misunderstands your intent and goes off on a tangent.

  • Low-Quality Content: Without specific instructions, the AI defaults to the most common and often most boring answer.

  • Wasted Time: You end up spending more time editing the AI's output than you would have spent writing it yourself.

By learning prompt engineering, you’re not just using the tool; you're mastering it. You're moving from a casual user to a power user.

Actionable Techniques You Can Use Today

Ready to get started? Here are some of the most effective techniques to improve your prompts immediately.

1. Be Specific and Direct

This is the golden rule. Vague prompts lead to vague answers. The more detail you provide, the better.

Bad Prompt: Write about the problems with modern software. 

This is way too broad. What problems? For whom? What kind of software?

Good Prompt: Write a brief, two-paragraph explanation for a non-technical manager about the common challenges of integrating legacy systems with new cloud-based applications. Use simple language and focus on the business impact of these challenges. 

Here, we've specified the audience, the topic, the length and the key focus. The AI knows exactly what to do.

2. Give the AI a Role

By assigning a specific persona or role to the AI, you can drastically change the tone, style and content of its response.

Example:

  • You are a senior software architect.

  • You are a performance testing expert.

  • You are a journalist writing a news headline.

This simple framing technique helps the AI access the right style and expertise from its vast training data.

3. Use Delimiters for Clarity

For longer or more complex prompts, it's easy for the AI to get confused about what's an instruction and what's data. Delimiters—like triple quotes ("""), XML tags (<data>) or even just a simple heading—can help separate these parts.

Example:

Your task is to summarize the following text into three key takeaways.

Text: """A recent study on microservices architecture showed a 
significant increase in development velocity but also a rise in 
operational complexity. The study found that teams using a distributed
 system required more robust monitoring and logging tools to maintain 
service reliability. However, the ability to independently deploy 
services led to faster feature delivery."""

This simple formatting ensures the AI knows exactly which part of the prompt is the text to be processed.

4. Specify the Output Format

Don't leave the output format up to chance. If you need a bulleted list, a JSON object or a markdown table, just ask for it.

Example: Create a JSON object from the following data, with keys for 'project_name', 'status' and 'due_date'.

This is especially powerful when you're using AI to generate data for a script or an application.
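
For instance, a response to that prompt might look something like this (the field values here are purely illustrative):

{
  "project_name": "Website Redesign",
  "status": "In Progress",
  "due_date": "2025-09-30"
}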


The Magic Prompt Template

Now, let's put it all together into a reusable template you can copy and paste. This template combines all the best practices we’ve discussed and will instantly upgrade your prompts. Just fill in the blanks!
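
Here is one way such a template can look, assembled from the four techniques above (the bracketed placeholders are yours to fill in):

You are a [role, e.g., senior software architect].
Your task is to [specific task, e.g., write a two-paragraph explanation of X].
The audience is [audience, e.g., a non-technical manager].
Format the output as [format, e.g., a bulleted list / a JSON object / a markdown table].

Context: """
[Paste any background text, data, or source material here.]
"""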

Understanding Model Context Protocol (MCP)

Have you ever noticed that even the smartest AI models sometimes seem to be operating in a vacuum? They're brilliant at answering a single question, but ask them a follow-up about the document they just summarized or the file you just opened, and they have no clue. That's because they're missing a critical piece of the puzzle: a standardized way to access and understand the world of information beyond their own neural networks.

This isn't just about memory management, important as that is; it’s about a much bigger challenge: connecting the LLM to the real-world tools, files and data that developers use every day. This is the core problem the Model Context Protocol (MCP) was built to solve. It's not just a set of rules for conversation; it's a blueprint for a whole new kind of architecture that links AI models directly to your data and tools.

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open standard that gives LLMs access to a wide variety of external contexts. Think of it as a universal language for AI integrations. Just like a USB-C port provides a standardized way to connect different devices—a monitor, a keyboard, or an external hard drive—MCP provides a standardized way to connect an AI application to tools, resources and prompts.

The ultimate goal is to break down the "information silos" that have historically isolated AI models, enabling them to build complex workflows and solve real-world problems.


The MCP Architecture: A Client-Server Model

The architecture of MCP is surprisingly straightforward and follows a classic client-server model. It’s not just a single application; it's a system of connected components that work together to provide context to the LLM.



Let's break down the key participants:

1. The MCP Host (The AI Application)

This is your AI-powered application—like an IDE with an integrated AI assistant, a desktop application or even a web-based chatbot. The host is the orchestrator: it coordinates and manages the entire process. It's the "client" in the client-server relationship, but it's more than that—it’s the interface the user interacts with.

2. The MCP Client (The Connector)

The MCP client is a component that lives within the MCP host. Its sole job is to maintain a connection to an MCP server and facilitate the exchange of information. The host uses the client to discover what capabilities (tools, resources, etc.) are available on the server.
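
To make this concrete, here is a rough sketch of what a host-to-server exchange looks like on the wire. MCP messages are JSON-RPC 2.0, sent newline-delimited over the server's stdin/stdout in the stdio transport. The server command and the message fields below are illustrative placeholders; consult the MCP specification for the full handshake:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

public class McpClientSketch {
    public static void main(String[] args) throws Exception {
        // Launch a hypothetical MCP server as a child process (stdio transport).
        Process server = new ProcessBuilder("my-mcp-server").start();
        BufferedWriter out = new BufferedWriter(new OutputStreamWriter(server.getOutputStream()));
        BufferedReader in = new BufferedReader(new InputStreamReader(server.getInputStream()));

        // 1. The client opens the session with an 'initialize' request.
        out.write("{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"initialize\",\"params\":{}}\n");
        out.flush();
        System.out.println("initialize -> " + in.readLine());

        // 2. It then discovers what the server offers, e.g. the tools it exposes.
        out.write("{\"jsonrpc\":\"2.0\",\"id\":2,\"method\":\"tools/list\"}\n");
        out.flush();
        System.out.println("tools/list -> " + in.readLine());

        server.destroy();
    }
}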

Thursday, 20 March 2025

Understanding the Internals of an AI Chatbot

AI chatbots have become an integral part of our digital experience, assisting users in customer support, content generation and even general conversations. But have you ever wondered how these chatbots work behind the scenes?

In this article, we’ll break down the internals of an AI chatbot, covering its key components and how it processes user inputs to generate meaningful responses.


Core Components of an AI Chatbot:

An AI chatbot comprises several key components that work together to understand and generate human-like responses.

  • Natural Language Processing (NLP):  NLP is the backbone of chatbot intelligence. It enables the bot to understand, interpret and generate human language. NLP is composed of several subcomponents:

    • Tokenization: Breaking down sentences into individual words or tokens.
    • Part-of-Speech Tagging (POS): Identifying the grammatical type of each word.
    • Named Entity Recognition (NER): Recognizing important entities like names, dates and locations.
    • Sentiment Analysis: Detecting the emotion behind the text.

  • Machine Learning Models: Most modern chatbots use machine learning models trained on large datasets of conversations. These models help the bot learn context, grammar and response generation. Some popular approaches are:

    • Rule-Based Models: Responding to inputs based on defined patterns/keywords.
    • Retrieval-Based Models: Selecting the best response from a set of predefined responses.
    • Generative Models: AI models such as GPT that generate responses dynamically based on context.

  • Dialog Management System: This system makes sure that conversations flow logically. It keeps track of context, user preferences and previous interactions to provide clear and relevant responses.

  • Backend & APIs: Backend consists of servers, databases and APIs that store conversation history, integrate with external systems and process requests efficiently.


How an AI Chatbot Processes a User Query:

When a user interacts with a chatbot, the following process happens behind the scenes (see the sketch after this list):

  • User Input: The user types a query.
  • Preprocessing: The input text is cleaned and prepared.
  • NLP Understanding: The chatbot extracts intent, context and key information.
  • Passing Input to LLM: The processed text is sent to the LLM (Large Language Model).
  • LLM Generates a Response: The model predicts the most relevant response using deep learning techniques.
  • Postprocessing: The generated response is refined.
  • Sending the Response: The chatbot displays the response to the user.
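
Here is a minimal Java sketch of that pipeline. The LanguageModel interface is a hypothetical stand-in for a real LLM client, and the cleanup rules are deliberately simple:

// Hypothetical stand-in for a real LLM client (e.g. an HTTP call to a model API).
interface LanguageModel {
    String complete(String prompt);
}

public class ChatbotPipeline {
    private final LanguageModel llm;

    public ChatbotPipeline(LanguageModel llm) {
        this.llm = llm;
    }

    public String respond(String userInput) {
        // Preprocessing: clean and normalize the raw input.
        String cleaned = userInput.trim().replaceAll("\\s+", " ");

        // NLP understanding is folded into the prompt here; real systems may
        // run separate intent/entity extraction before calling the model.
        String prompt = "User says: " + cleaned + "\nRespond helpfully:";

        // Pass the processed text to the LLM to generate a response.
        String raw = llm.complete(prompt);

        // Postprocessing: refine the generated response before display.
        return raw.strip();
    }
}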



Challenges in Building an AI Chatbot:

Developing an AI chatbot comes with a lot of challenges, like:

  • Understanding Complex Queries: Handling ambiguous or multi-intent queries.
  • Context Retention: Maintaining long-term conversational context.
  • Bias and Ethical Concerns: Avoiding biases in response and ensuring responsible AI use.
  • Integration with External Systems: Seamless connection with databases and APIs for accurate and relevant responses.

Future of AI Chatbots:

With advancements in deep learning, AI chatbots are becoming more sophisticated. Future trends include:
  • More Human-Like Conversations: Improved emotional intelligence and personalization.
  • Multimodal Capabilities: Combining text, voice and images for better interactions.
  • Autonomous AI Agents: Self-learning bots that adapt to user preferences dynamically.




Thursday, 27 February 2025

Techniques to Improve LLMs and Their Differences

As large language models (LLMs) keep transforming natural language processing (NLP), their capabilities can be further enhanced through specialized techniques that improve accuracy, flexibility and contextual awareness. Although LLMs are strong by themselves, augmenting techniques such as Retrieval-Augmented Generation (RAG), fine-tuning and other advanced methodologies can optimize performance for targeted applications. These methods enable models to tap external knowledge, adapt to new domains and return more accurate, contextually intelligent outputs.

This blog discusses multiple methods, including RAG, CAG (Context Augmented Generation), KAG (Knowledge Augmented Generation) and fine-tuning, by which LLMs can be extended and broadened in function, with descriptions of how each works and when each is best used.
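
As a taste of the first of these techniques, here is a minimal Java sketch of the RAG pattern: retrieve relevant documents, then prepend them to the prompt so the model can ground its answer in external knowledge. The Retriever and LanguageModel interfaces are hypothetical placeholders for a vector store and an LLM client:

import java.util.List;

// Hypothetical interfaces; real systems use a vector store and an LLM client.
interface Retriever {
    List<String> topK(String query, int k);
}

interface LanguageModel {
    String complete(String prompt);
}

public class RagSketch {
    public static String answer(Retriever retriever, LanguageModel llm, String question) {
        // 1. Retrieve the documents most relevant to the question.
        List<String> docs = retriever.topK(question, 3);

        // 2. Augment the prompt with the retrieved context.
        StringBuilder prompt = new StringBuilder("Answer using only the context below.\n\nContext:\n");
        for (String doc : docs) {
            prompt.append("- ").append(doc).append('\n');
        }
        prompt.append("\nQuestion: ").append(question);

        // 3. Generate a grounded response.
        return llm.complete(prompt.toString());
    }
}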










Friday, 24 January 2025

Large Language Models - LLM on Local Machine

Large Language Models (LLMs) have revolutionized AI by enabling machines to understand and generate human-like text. While major companies offer these models over the cloud, many users are exploring the benefits of running LLMs locally on their machines, especially when privacy, cost-efficiency and data control are key considerations. While running LLMs traditionally requires powerful GPUs, tools like Ollama make it possible to run models locally, even with just a CPU.

Let's explore how to use Ollama on a local machine via the command line (CLI), without writing any code, and discuss the advantages of running LLMs locally using only CPUs.


Why run LLMs locally on your CPU?

Running LLMs on a CPU offers several key benefits, especially when you do not have access to high-end GPUs:

1. Cost Efficiency

  • Cloud-based APIs can incur significant costs, particularly if you're running models regularly. By running the model locally, you eliminate those recurring costs.

2. Privacy and Security

  • Keeping all data processing local ensures that sensitive information doesn’t have to leave your machine, protecting your privacy and offering full control over your data.

3. Flexibility and Control

  • Running an LLM locally on your machine gives you the freedom to customize it for specific use cases without being constrained by cloud service terms or API limitations. You can use it in your preferred workflow and modify it as needed.

4. No Network Latency

  • With a local installation, you avoid the delays that come from network calls to cloud services, giving you near-instant access to the model.

Ollama


Ollama is a simple, user-friendly tool that allows you to run pre-trained language models locally on your machine. Ollama is optimized for both CPU and GPU usage, meaning you can run it even if your machine doesn’t have powerful GPU hardware. It abstracts the complexity of setting up models and running them, offering a clean command-line interface (CLI) that makes it easy to get started.

Setting up Ollama:

Step 1:
  • Visit the Ollama website to download the installer for your platform (Windows, macOS, or Linux).
  • Follow the installation instructions provided for your operating system.
Once installed, you’ll have access to the ollama command directly in your terminal.

Step 2:
  • Open the terminal and verify the Ollama installation with the command below:
    • ollama --version

      This confirms that Ollama is installed properly.
  • Pull any locally deployable model, like Llama 3.2:
    • ollama pull llama3.2:1b

  • List the models downloaded on the local system:
    • ollama list

  • Run the model and start the conversation with Ollama (replace modelname with a model you pulled, e.g. llama3.2:1b):
    • ollama run modelname


Useful Ollama Commands:


Command        Description
ollama serve   Starts Ollama on your local system.
ollama show    Displays details about a specific model, such as its configuration and release date.
ollama run     Runs the specified model, downloading it first if it isn't already on your system.
ollama list    Lists all the downloaded models.
ollama ps      Shows the currently running models.
ollama stop    Stops the specified running model.
ollama pull    Pulls the specified model to the local system.
ollama rm      Removes the specified model from your system.
/bye           Exits the Ollama conversation (typed inside an ollama run session).
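
Besides the CLI, Ollama also exposes a local REST API (by default at http://localhost:11434), which is handy for calling a local model from code. Below is a small Java sketch using the JDK's built-in HTTP client; it assumes you have pulled llama3.2:1b as above and that the Ollama server is running:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaClient {
    public static void main(String[] args) throws Exception {
        // JSON payload for Ollama's /api/generate endpoint.
        // "stream": false asks for one complete JSON reply instead of chunks.
        String body = "{\"model\":\"llama3.2:1b\",\"prompt\":\"Why is the sky blue?\",\"stream\":false}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The reply is a JSON object whose "response" field holds the generated text.
        System.out.println(response.body());
    }
}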



Challenges and Limitations of running LLMs locally on CPUs

While running LLMs locally with Ollama has many advantages, there are some limitations to keep in mind:

  • Performance on CPUs: Running large models on CPUs can be slower than using GPUs. Although Ollama is optimized for CPUs, you may still experience slower response times, especially with more complex tasks.

  • Memory Usage: LLMs can consume a lot of memory, and running them locally may require a significant amount of RAM. Ensure that your machine has at least 16 GB of RAM for decent performance. Larger models will require more memory, which could lead to slowdowns or crashes on systems with limited resources.

  • Model Size: Some larger models, such as GPT-3, may not be practical to run on a CPU due to their massive size and resource requirements. Ollama offers smaller models that are more feasible to run on CPU-based machines, but for the largest models, you may still need a GPU for optimal performance.


Conclusion:

Ollama makes running LLMs locally on a CPU simple and accessible, offering an easy-to-use CLI for tasks like text generation and question answering, without needing code or complex setups. It lets you run models efficiently on various hardware, providing privacy, cost savings, and full data control.




Friday, 4 March 2022

How to Deploy Selenium grid on AWS / Amazon EKS

  


Selenium Grid is good for parallel execution, but maintenance is a nightmare in an era of frequent upgrades to browsers and their corresponding drivers. As usage of the automation framework / Selenium Grid increases, scalability and maintenance become a challenge. To address such issues, we do have solutions based on Docker, Docker Swarm, etc. Having said that, there are some caveats around scaling, managing container health and so on.

The solution below tries to address most of them. Major chunks of the solution include Selenium, Zalenium, Docker, Kubernetes and Amazon EKS.

This article outlines the process of deploying a Selenium Grid (Zalenium) on AWS (Amazon EKS) using Kubernetes and Helm.

What do we achieve with this setup?

  • Scalability: EKS can scale the nodes and pods as per the given configuration.
  • Visibility: Zalenium provides a feature to view the live executions on the containers.
  • Availability: Amazon EKS cluster makes selenium grid available all the time.
  • Maintenance: Low maintenance as the containers are destroyed after each execution.

Pre-requisites:

  • An active Amazon AWS account.
  • An IAM user created in the AWS account.
  • AWS CLI connected to the AWS account by providing the user credentials, using local PowerShell or any terminal
                                                                OR
  • Use AWS CloudShell, which is automatically connected to the logged-in account.
  • Install AWS CLI (for a local terminal), kubectl and helm, in the given order.

Let's Get Started!

Once the above pre-requisites are met, the next task in deploying any application on Kubernetes is to create a Kubernetes cluster. There are different ways to create a cluster on AWS; I'll outline a couple of ways to achieve the same.

First: Create the cluster from the AWS GUI.

1. Create the master node or cluster
  • Open the Amazon EKS console
  • Choose Create Cluster
  • Provide details like cluster name, k8s version and role
  • Select VPC, security groups and endpoint access
  • Follow the further steps shown on the GUI, which will make the 'master' ready.
2. Create worker nodes and connect them to the above cluster.
  • Create a Node Group of Amazon EC2 instances.
  • Choose the cluster to which the above node group should be attached.
  • Select security groups, resources, etc.
  • Define the min and max number of nodes.
Sounds complex? No issues, there is a simpler and more efficient way to do the whole thing.

Second: Create the cluster using eksctl (the official CLI for Amazon EKS).

The above complex task can be achieved with a single command, as shown below.
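
For illustration, a typical invocation looks something like this (the cluster name, region, instance type and node counts are placeholders to adapt):

eksctl create cluster --name selenium-grid --region us-east-1 --nodegroup-name grid-nodes --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4

eksctl provisions the control plane and the worker node group in one go, replacing both GUI flows described above.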

Wednesday, 23 February 2022

How to Deploy Selenium grid on Google Cloud Platform using Kubernetes and Helm

                                          



It is an open secret that Selenium Grid maintenance is a nightmare in an era of frequent upgrades to browsers and their corresponding drivers. As usage of the automation framework / Selenium Grid increases, scalability and maintenance become a challenge. To address such issues, we do have solutions based on Docker, Docker Swarm, etc. Having said that, there are some caveats around scaling, managing container health and so on.

The solution below tries to address most of them. Major chunks of the solution include Selenium, Zalenium, Docker, Kubernetes and Google Cloud Platform.

This article outlines the process of deploying a Selenium Grid (Zalenium) on Google Cloud Platform using Kubernetes and Helm.

What do we achieve with this setup?

  • Scalability: GCP/GKE can scale the nodes and pods as per the given configuration.
  • Visibility: Zalenium provides a feature to view the live executions on the containers.
  • Availability: GKE kubernetes cluster makes selenium grid available all the time.
  • Maintenance: Low maintenance as the containers are destroyed after each execution.

Pre-requisites:

  • An active Google Cloud Platform account
  • Enable the Kubernetes Engine by setting up a Billing Account.
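
On GCP, the cluster-creation step can also be done with a single command. A rough sketch (the cluster name, zone and node count are placeholders):

gcloud container clusters create selenium-grid --zone us-central1-a --num-nodes 3

Once the cluster is up, kubectl and helm can target it just as in the EKS flow.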

Friday, 18 February 2022

How to setup Selenium Grid using Docker Desktop - Windows

Selenium Grid has made test automation execution much faster and smarter. Excessive usage of a Selenium Grid has its own hitches, though, like heavy system resource utilization, the need to install all the browsers on the system, etc.

The introduction of Docker images for Selenium made testers' lives a lot easier. Out of the multiple ways to bring up a Selenium Grid with Docker, I would choose the simple and quick way: docker-compose.

Pre-requisite: Docker desktop for windows is installed.


Steps to bring up Selenium Grid:

  • Pull the required version of the Selenium hub and node images using the "docker pull" command, as below:

         docker pull selenium/hub:3.141.59
         docker pull selenium/node-chrome-debug:3.141.59

  • Create a yaml file (e.g., docker-compose.yml) with docker-compose instructions; a minimal example follows below.
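
This sketch wires one hub and one Chrome node on the same version pulled above (the service names are placeholders; HUB_HOST/HUB_PORT follow the Selenium 3.x Docker image conventions):

version: "3"
services:
  selenium-hub:
    image: selenium/hub:3.141.59
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome-debug:3.141.59
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444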

  • Run the command below to bring the grid up in detached mode:
         docker-compose -f docker-compose.yml up -d

Thursday, 23 September 2021

Dockers and Kubernetes Cheat sheet

Below is a list of frequently used commands. These come in very handy while working with Docker & Kubernetes.

Docker & Kubernetes Commands 



Thursday, 12 August 2021

How to create and execute Jmeter script using Java


Performance Test! When we say this term, one of the first things that comes to mind is 'JMeter'.

JMeter is the go-to tool for performance testing needs in the open-source community. It is built completely using Java and designed to perform load tests and measure performance. It can simulate load on a server, a group of servers or a network to check threshold limits and analyze performance under different types of load. A vast list of plug-ins extends JMeter's capabilities, letting it handle most performance-test requirements.

Mostly, the JMeter GUI is used to create the scripts, configure the users, capture other details and execute the scripts. But when it comes to integrating performance tests into code-driven automation frameworks, one has to switch to the JMeter GUI to create scripts and fall back to the framework to execute them. To make the integration seamless, JMeter scripts can be created and executed at runtime from the code-driven framework itself. The code below shows how.

The snippet below will let you create a JMeter script (jmx) for a web service by adding minimal elements to the test plan. The typical hierarchy of a web request in a jmx is:

Test Plan → Thread Group → Sampler → Assertions → Listeners

Steps:

  • Create a maven project through eclipse or any IDE.
  • Add the below Jmeter dependencies in your POM file:
    • ApacheJMeter_core
    • ApacheJMeter_components
    • ApacheJMeter_http
    • jorphan
    • ApacheJMeter_java
  • Create a class named "APITest" and copy the code snippet below.
  • Change the service details & file locations accordingly.
  • Done! You are all set to create and run the JMeter script from Java.

import java.io.File;
import java.io.FileOutputStream;
import org.apache.commons.io.FileUtils;
import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.config.gui.ArgumentsPanel;
import org.apache.jmeter.control.LoopController;
import org.apache.jmeter.control.gui.LoopControlPanel;
import org.apache.jmeter.control.gui.TestPlanGui;
import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.protocol.http.control.gui.HttpTestSampleGui;
import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;
import org.apache.jmeter.report.config.ConfigurationException;
import org.apache.jmeter.report.dashboard.ReportGenerator;
import org.apache.jmeter.reporters.ResultCollector;
import org.apache.jmeter.reporters.Summariser;
import org.apache.jmeter.save.SaveService;
import org.apache.jmeter.testelement.TestElement;
import org.apache.jmeter.testelement.TestPlan;
import org.apache.jmeter.threads.ThreadGroup;
import org.apache.jmeter.threads.gui.ThreadGroupGui;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;

public class APITest {
	
    public void createAndExecute() {
		File jmeterHome = new File("C:/apache-jmeter");
		try {
			if (jmeterHome.exists()) {
				// JMeter Engine
				StandardJMeterEngine jmeter = new StandardJMeterEngine();
				setconfig(jmeterHome, "./htmlreportsdir");

				// JMeter Test Plan, basically a JOrphan HashTree
				HashTree testPlanTree = new HashTree();
				// HTTP Sampler
				HTTPSamplerProxy httpSampler = new HTTPSamplerProxy();
				httpSampler.setName("HTTP Sampler");
				httpSampler.setProtocol("https");
		        httpSampler.setDomain("testjmeter.com");
		        httpSampler.setPort(8080);
		        httpSampler.setPath("/getservicepath");
		        httpSampler.setMethod("GET");
		        httpSampler.setProperty(TestElement.TEST_CLASS, HTTPSamplerProxy.class.getName());
		        httpSampler.setProperty(TestElement.GUI_CLASS, HttpTestSampleGui.class.getName());
		        httpSampler.setEnabled(true);
		        
		        httpSampler.addArgument("Arg1","val1");//Arguments
		        httpSampler.addArgument("Arg1","val1");//Arguments
		        
		        httpSampler.addNonEncodedArgument("", "serviceBody", "=");//payload
	        	httpSampler.setPostBodyRaw(true);
	        	
				//Loop Controller
				LoopController loopController = new LoopController();
		        loopController.setLoops(1);
		        loopController.setFirst(true);
		        loopController.setProperty(TestElement.TEST_CLASS, LoopController.class.getName());
		        loopController.setProperty(TestElement.GUI_CLASS, LoopControlPanel.class.getName());
		        loopController.initialize();
		        
				//Thread Group
				ThreadGroup threadGroup = new ThreadGroup();
		        threadGroup.setName("API Thread Group");
				threadGroup.setNumThreads(20); //Users
		        threadGroup.setRampUp(10); //Seconds
		        threadGroup.setSamplerController(loopController);
		        threadGroup.setProperty(TestElement.TEST_CLASS, ThreadGroup.class.getName());
		        threadGroup.setProperty(TestElement.GUI_CLASS, ThreadGroupGui.class.getName());
		        
		        threadGroup.setIsSameUserOnNextIteration(true);
		        threadGroup.setScheduler(false);
				
				//Test Plan
				TestPlan testPlan = new TestPlan("JMeter Script From Java Code");
		        testPlan.setProperty(TestElement.TEST_CLASS, TestPlan.class.getName());
		        testPlan.setProperty(TestElement.GUI_CLASS, TestPlanGui.class.getName());
		        testPlan.setUserDefinedVariables((Arguments) new ArgumentsPanel().createTestElement());
				
				//Construct Test Plan from previously initialized elements
				testPlanTree.add(testPlan);
				HashTree threadGroupHashTree = testPlanTree.add(testPlan, threadGroup);
				threadGroupHashTree.add(httpSampler);

				// save generated test plan to JMeter's .jmx file format
				String jmxFilePath = "./jmxfiles/TestAPI.jmx";
				SaveService.saveTree(testPlanTree, new FileOutputStream(jmxFilePath));

				// add Summariser output to get test progress in stdout
				String jtlFilePath = "./jtlFiles/TestAPI.jtl";
				ReportGenerator reportGenerator = setReportInfo(testPlanTree, jtlFilePath);
				
				//Run Test Plan
				jmeter.configure(testPlanTree);
		        jmeter.run();
		        
				// Report Generator
				FileUtils.deleteDirectory(new File("./htmlreportsdir"));// delete old report
				FileUtils.deleteDirectory(new File("./reportsdir"));// delete old report
				reportGenerator.generate();

				System.out.println("Test completed. See " + jtlFilePath + " file for results");
				System.out.println("JMeter .jmx script is available at " + jmxFilePath);
				
			} else {
				System.out.println("Jmeter Home not found..");
			}

		} catch (Exception e) {
			System.out.println(e.getMessage());
		}
	}
    
	public void setconfig(File jmeterHome,String htmlrepDir){
		File jmeterProperties = new File(jmeterHome.getPath() +"/bin/jmeter.properties");
        //JMeter initialization (properties, log levels, locale, etc)
        JMeterUtils.setJMeterHome(jmeterHome.getPath());
        JMeterUtils.loadJMeterProperties(jmeterProperties.getPath());
        JMeterUtils.initLocale();
        
        //Set directory for HTML report
        JMeterUtils.setProperty("jmeter.reportgenerator.exporter.html.property.output_dir",htmlrepDir);
	}
	
	
	public ReportGenerator setReportInfo(HashTree testPlanTree,String jtlFilePath) throws ConfigurationException{
		Summariser summer = null;
        String summariserName = JMeterUtils.getPropDefault("summariser.name", "summary");
        if (summariserName.length() > 0) {
            summer = new Summariser(summariserName);
        }
        
        // Store execution results into a .jtl file
        File logFile = new File(jtlFilePath);
        //delete log file if exists
        if (logFile.exists()){
            boolean delete = logFile.delete();
            System.out.println("Jtl deleted: " + delete);
        }
        
        //Summary Report
        ResultCollector logger = new ResultCollector(summer);
        logger.setEnabled(true);
        logger.setFilename(logFile.getPath());
        //creating ReportGenerator for creating HTML report
        ReportGenerator reportGenerator = new ReportGenerator(jtlFilePath, logger); 
        testPlanTree.add(testPlanTree.getArray()[0], logger);
         
	return reportGenerator;
	}
		
}

Friday, 4 June 2021

How to establish a connection to Remote Host & Copy file

Many times we get the requirement to connect to a remote server and perform operations like copying a file from local to remote, vice versa and many others. But have you ever thought about how we do that through Java? Yes, we can. Java has multiple libraries which can perform this task. One such Java library is JCraft's 'JSch (Java Secure Channel)'.

The example below shows how to connect to a remote host and perform copy operations.


import com.jcraft.jsch.Channel;
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;
import com.jcraft.jsch.SftpException;

public class RemoteFileCopy {

    public void copyFiletoRemotehost(String username, String pwd, String[] remotehosts, String port) {
        Session jschSession = null;
        int SESSION_TIMEOUT = 10000;
        final int CHANNEL_TIMEOUT = 5000;
        String REMOTE_HOST = "";
        int REMOTE_PORT = 22;

        String lFile = "C:/localhost/local.txt";
        String rFile = "/localtoRemote.txt";
        try {
            JSch jsch = new JSch();
            for(int i=0; i < remotehosts.length; i++) {
            	REMOTE_HOST = remotehosts[i].toString();
            	jschSession = jsch.getSession(username, REMOTE_HOST, REMOTE_PORT);

                //Remove known_hosts requirement
                java.util.Properties config = new java.util.Properties();
                config.put("StrictHostKeyChecking", "no");
                jschSession.setConfig(config);
                
                //authenticate using password
                jschSession.setPassword(pwd);
                jschSession.connect(SESSION_TIMEOUT);
                Channel sftp = jschSession.openChannel("sftp");
                sftp.connect(CHANNEL_TIMEOUT);

                ChannelSftp channelSftp = (ChannelSftp) sftp;
                channelSftp.put(lFile, rFile);
                channelSftp.exit();
                
                //Copy from one location to another with remote server
                copyFile(jschSession);
                jschSession.disconnect();
            }
        } catch (JSchException | SftpException e) {
            e.printStackTrace();
        } finally {
            if (jschSession != null) {
                jschSession.disconnect();
            }
        }
    }
    
    /**
    * This method will let you copy from one folder to another within the remote server
    */
    public static void copyFile(Session jschSession) throws JSchException {
        final int CHANNEL_TIMEOUT = 5000; // channel connect timeout in ms
        String cmd1 = "cp C:/username/localtoRemote.txt  C:/anotherfolder/localtoRemote.txt";
        Channel exec = jschSession.openChannel("exec");

        ChannelExec channelExec = (ChannelExec) exec;
        channelExec.setCommand(cmd1);
        channelExec.setErrStream(System.err); // set before connecting so errors are captured
        exec.connect(CHANNEL_TIMEOUT);
        channelExec.disconnect();
    }
}

References:
http://www.jcraft.com/jsch/

Saturday, 18 July 2020

Hybrid App Automation using Appium

In the mobile world, we see multiple types of apps, namely the Native App, the Hybrid App and the browser-based Web App. We also see multiple operating systems like Android, iOS, Windows, Firefox OS, etc.

Automated testing of these many variations of apps is a very tricky and difficult task. Not all of them are supported by a single automation tool. Having said that, Appium, which is built over Selenium WebDriver, supports most OS and app combinations with various configurations.

In this article, you will learn a way to automate a Hybrid App using AndroidDriver.

Prerequisites:

  1. Android SDK is downloaded and installed. (ANDROID_HOME env. variable should be set)
  2. Either Emulator or  Real device (USB debugging enabled) is available.
  3. Appium desktop app or non GUI package is installed. 

Steps:

  1. Create a java project in eclipse 'AppiumMobileTesting'.
  2. Create a class 'TestHybridApp.java'.
  3. Connect your device(USB debugging enabled), to system's USB port & accept if any confirmation pop ups.
  4. Start Appium Server. Once the device is connected, open the 'Appium Inspector Session' providing the suitable capabilities.
  5. Open the Hybrid app in the device and you see the same screen on inspector session.
  6. Click on the required control in inspector session & then the related locator details are shown on the right hand side in appium inspector.
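
Once the locators are known, the automation code itself hinges on switching the driver from the native context to the webview. A minimal sketch with the Appium Java client (the server URL is Appium's default; the app package, activity and webview name are hypothetical placeholders):

import java.net.URL;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

public class TestHybridApp {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "MyDevice");
        caps.setCapability("appPackage", "com.example.hybridapp"); // placeholder
        caps.setCapability("appActivity", ".MainActivity");        // placeholder

        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);

        // A hybrid app exposes both a NATIVE_APP and one or more WEBVIEW contexts.
        for (String context : driver.getContextHandles()) {
            System.out.println("Available context: " + context);
        }

        // Switch into the webview so web locators (id, css, xpath) work on the HTML content.
        driver.context("WEBVIEW_com.example.hybridapp"); // placeholder name

        // ... interact with web elements here, then switch back if needed:
        driver.context("NATIVE_APP");
        driver.quit();
    }
}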

Friday, 6 March 2020

How to query or filter Json using RestAssured JsonPath


REST Assured (RA) is a framework built on Java to test REST services. It supports any HTTP method but has explicit support for GET, POST, PUT, DELETE, HEAD, OPTIONS and PATCH, and includes support for specifying and validating things like headers, cookies, etc.

For validating and querying web service responses, it has a couple of libraries, 'JsonPath' and 'XmlPath', for parsing JSON and XML responses respectively.

Even though we know we have to use these libraries, most of the time we get into situations where we forget or are not sure of the actual syntax. Here is a list of a few scenarios and their JsonPath syntax (XmlPath syntax is almost similar) for your reference.

All the Jsonpath examples in this post, use the following Json:

 
{
   "store":{
      "book":[
         {
            "category":"reference",
            "author":"Nigel Rees",
            "title":"Sayings of the Century",
            "price":8.95
         },
         {
            "category":"fiction",
            "author":"Evelyn Waugh",
            "title":"Sword of Honour",
            "price":12.99
         },
         {
            "category":"horror",
            "author":"Herman Melville",
            "title":"Moby Dick",
            "isbn":"0-553-21311-3",
            "price":8.99
         },
         {
            "category":"fiction",
            "author":"J. R. R. Tolkien",
            "title":"The Lord of the Rings",
            "isbn":"0-395-19395-8",
            "price":22.99
         }
      ],
      "bicycle":{
         "color":"red",
         "price":19.95
      }
   },
   "city":"Bangalore"
}



The following examples show how to effectively use JsonPath to extract the required values from a RESTful JSON response in different scenarios:
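
A few representative queries against the JSON above (JsonPath in REST Assured uses Groovy GPath syntax, so the filter expressions below are ordinary Groovy closures):

import io.restassured.path.json.JsonPath;
import java.util.List;

public class JsonPathExamples {
    public static void main(String[] args) {
        String json = "..."; // the store JSON shown above
        JsonPath jsonPath = new JsonPath(json);

        // Simple field access
        String city = jsonPath.getString("city");                   // Bangalore
        float bikePrice = jsonPath.getFloat("store.bicycle.price"); // 19.95

        // All values of a field across an array
        List<String> titles = jsonPath.getList("store.book.title");

        // Filtering with a GPath closure
        List<String> fictionAuthors =
            jsonPath.getList("store.book.findAll { it.category == 'fiction' }.author");

        // Title of the first book cheaper than 10
        String cheap = jsonPath.getString("store.book.find { it.price < 10 }.title");

        System.out.println(city + ", " + bikePrice + ", " + titles + ", " + fictionAuthors + ", " + cheap);
    }
}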

Thursday, 18 April 2019

How to send Cookies from Web(GUI) to Web Service


Cookies: Browser cookies are also referred to as Internet cookies, web cookies or HTTP cookies. A cookie is a "small piece of data" sent from a website to the user's device while the user is accessing the website.
There are multiple ways these cookies are used in the modern world: authentication, browser state information, storing personal information like name and address, tracking user activity on the web, and so on. Different types of cookies satisfy different needs.

In certain cases, one might have to test web services which need cookies to be passed as part of the request. One way to address this is to combine web application and web service test steps.

The code snippet below opens the web application, captures the cookies stored by the application and passes them to the web service using REST Assured.


public class CookiesToWebServices {
static WebDriver driver;
static String driverPath = "C:/Drivers/chromedriver.exe";

public static void main(String[] args) {

System.setProperty("webdriver.chrome.driver", driverPath);
ArrayList<String> cookieList = new ArrayList<>();

String url = "www.testurl.com/testpath/id";
HashMap<String, String> headermap = new HashMap<>();
headermap.put("Content-Type", "application/json");
headermap.put(....

HashMap<String, String> queryParammap = new HashMap<>();
queryParammap.put("id", "12345");
queryParammap.put(....
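
From here, the flow is to open the page with WebDriver, harvest its cookies and hand them to REST Assured. A rough sketch of those remaining steps, reusing the variables above (imports for ChromeDriver, Cookie, Map, RestAssured and Response are assumed):

driver = new ChromeDriver();
driver.get("https://" + url);

// Capture the cookies the application stored in the browser
Map<String, String> cookieMap = new HashMap<>();
for (Cookie cookie : driver.manage().getCookies()) {
    cookieMap.put(cookie.getName(), cookie.getValue());
}
driver.quit();

// Pass the captured cookies along with the web service request
Response response = RestAssured.given()
        .headers(headermap)
        .queryParams(queryParammap)
        .cookies(cookieMap)
        .get("https://" + url);
System.out.println(response.getStatusCode());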

Monday, 31 December 2018

How to Connect Zephyr For Jira Programmatically (ZAPI)

Jira by default is a 'Defect Management Tool'. However, the available add-ons for Jira make it much more capable than just a defect management tool. A couple of such add-ons, 'Zephyr for Jira' & 'ZAPI' (an API to access data from 'Zephyr for Jira' programmatically), make Jira capable of 'Test Management'. Both of these add-ons have different versions for Server & Cloud accordingly. Let us look at how we work with the ZAPI server version.


Use Case: Whenever the automation is run, it has to update the status of execution in test cycle.

Pre-requisite: Manual tests are created in Jira with Issue Type 'Test', and a test cycle is created with the tests added into it (adding tests into a test cycle creates execution ids for each test case in the cycle).

High Level Steps:
  1. Login to Jira using proper credentials.
  2. Provide the Cycle Id, or the Project Id, Version & Cycle Id, of the test cycle to the ZAPI API.
  3. Update the Test case status in test cycle based on the execution id of each test case.

Example to Update Status of Test case execution Using ZAPI API:


This example shows how to consume ZAPI REST APIs from Java using the REST Assured libraries, as sketched below.
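
A minimal sketch follows. The base URL, credentials and execution id are placeholders, and the endpoint path follows the ZAPI server conventions; verify both against your ZAPI version's documentation (status "1" conventionally means PASS, "2" FAIL):

import io.restassured.RestAssured;
import io.restassured.response.Response;

public class ZapiUpdateExecution {
    public static void main(String[] args) {
        String jiraBaseUrl = "http://your-jira-server:8080"; // placeholder
        String executionId = "12345";                        // from the test cycle

        Response response = RestAssured.given()
                .auth().preemptive().basic("jira.user", "password") // placeholder credentials
                .header("Content-Type", "application/json")
                .body("{\"status\": \"1\"}")                        // 1 = PASS, 2 = FAIL
                .put(jiraBaseUrl + "/rest/zapi/latest/execution/" + executionId + "/execute");

        System.out.println("ZAPI responded with HTTP " + response.getStatusCode());
    }
}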