Request for interview questions for Testing (QA)
09-07-2004, 03:54 AM #1
Request for interview questions for Testing (QA)
Can somebody please provide interview questions for a testing job? I also need questions for WinRunner, LoadRunner, and TestDirector. If possible, answers along with the questions would be greatly appreciated, but if the answers are not available for the time being, the questions themselves will be of great help. Thanks!
09-07-2004, 04:01 AM #2
Re: Request for interview questions for Testing (QA)
Please disregard my earlier posting. I found interview questions for testing in the "Question Bank". I want to thank whoever posted all these questions.
If possible, can somebody also post answers to those questions which do not have answers yet? I would like to thank him/her in advance!
09-07-2004, 04:32 AM #3
Re: Request for interview questions for Testing (QA)
wildkid98, please join the Yahoo groups and get the testing material there. You can find a demo there also. We will update the question bank as we get answers. If you have any, please send them to us and we will update the database.
02-06-2007, 07:57 AM #4
Hi, these are the interview questions and answers for all tools.
Interview Questions - General Testing Questions
1. What is 'Software Quality Assurance'?
Software QA involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.
2. What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.
Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.
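For example, here is a minimal sketch in C of testing under 'controlled conditions', exercising both a normal and an abnormal input; the safe_divide function and the test cases are invented for illustration:

```c
#include <stdio.h>

/* Hypothetical function under test: integer division that
 * signals failure instead of dividing by zero. */
static int safe_divide(int a, int b, int *result)
{
    if (b == 0)
        return -1;      /* abnormal condition: refuse to divide */
    *result = a / b;
    return 0;           /* normal condition */
}

int main(void)
{
    int r;
    /* Normal condition: 'if the user does C, then D should happen'. */
    if (safe_divide(10, 2, &r) != 0 || r != 5)
        printf("FAIL: normal case\n");
    /* Abnormal condition: deliberately try to make things go wrong. */
    if (safe_divide(10, 0, &r) != -1)
        printf("FAIL: divide-by-zero not rejected\n");
    printf("done\n");
    return 0;
}
```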
3. What are some recent major computer system failures caused by software bugs?
* Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project. Studies were under way to determine which, if any, portions of the project could be salvaged.
* In July 2004 newspapers reported that a new government welfare management system in Canada costing several hundred million dollars was unable to handle a simple benefits rate increase after being put into live operation. Reportedly the original contract allowed for only 6 weeks of acceptance testing and the system was never tested for its ability to handle a rate increase.
* Millions of bank accounts were impacted by errors due to installation of inadequately tested software code in the transaction processing system of a major North American bank, according to mid-2004 news reports. Articles about the incident stated that it took two weeks to fix all the resulting errors, that additional problems resulted when the incident drew a large number of e-mail phishing attacks against the bank's customers, and that the total cost of the incident could exceed $100 million.
* A bug in site management software utilized by companies with a significant percentage of worldwide web traffic was reported in May of 2004. The bug resulted in performance problems for many of the sites simultaneously and required disabling of the software until the bug was fixed.
* According to news reports in April of 2004, a software bug was determined to be a major contributor to the 2003 Northeast blackout, the worst power system failure in North American history. The failure involved loss of electrical power to 50 million customers, forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug was reportedly in one utility company's vendor-supplied power monitoring and management system, which was unable to correctly handle and report on an unusual confluence of initially localized events. The error was found and corrected after examining millions of lines of code.
* In early 2004, news reports revealed the intentional use of a software bug as a counter-espionage tool. According to the report, in the early 1980's one nation surreptitiously allowed a hostile nation's espionage service to steal a version of sophisticated industrial software that had intentionally-added flaws. This eventually resulted in major industrial disruption in the country that used the stolen flawed software.
* A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to web site errors that enabled customers to view one another's online orders.
* News stories in the fall of 2003 stated that a manufacturing company recalled all their transportation products in order to fix a software problem causing instability in certain circumstances. The company found and reported the bug itself and initiated the recall procedure in which a software upgrade fixed the problems.
* In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.
* News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn't work.
* In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.
* In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
* Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.
* January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.
4. Why is it often hard for management to get serious about quality assurance?
* Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. Asked which member of his family was the most skillful healer, the physician replied that his own fame came from dramatic cures of the seriously ill, but that his eldest brother, who removed the causes of illness before they could take hold and so was unknown beyond the household, was the most skillful of all. Prevention, like QA, rarely gets noticed.
5. Why does software have bugs?
* Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
* Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity.
* Programming errors - programmers, like anyone else, can make mistakes.
* Changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control - see 'What can be done if requirements are changing continuously?' in Part 2 of the FAQ. Also see information about 'agile' approaches such as XP, also in Part 2 of the FAQ.
* Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
* Egos - people prefer to say things like:
  - 'no problem'
  - 'piece of cake'
  - 'I can whip that out in a few hours'
  - 'it should be easy to update that old code'
instead of:
  - 'that adds a lot of complexity and we could end up making a lot of mistakes'
  - 'we have no idea if we can do that; we'll wing it'
  - 'I can't estimate how long it will take, until I take a close look at it'
  - 'we can't figure out what that old spaghetti code did in the first place'
If there are too many unrealistic 'no problem's', the result is bugs.
* Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
* Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.
6. How can new Software QA processes be introduced in an existing organization?
* A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
* Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
* For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
* The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, or in 'agile'-type environments extensive continuous coordination with end-users, (b) design inspections and code inspections, and (c) post-mortems/retrospectives.
7. What is verification? validation?
* Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.
8. What is a 'walkthrough'?
* A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
9. What's an 'inspection'?
* An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report.
10. What kinds of testing should be considered?
* Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
* White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
* Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
* Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
* Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
* Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
* System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
* End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
* Sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
* Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
* Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
* Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
* Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
* Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
* Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
* Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
* Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
* Failover testing - typically used interchangeably with 'recovery testing'.
* Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
* Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
* Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
* Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
* Context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
* User acceptance testing - determining if software is satisfactory to an end-user or customer.
* Comparison testing - comparing software weaknesses and strengths to competing products.
* Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
* Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
* Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
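As a hedged illustration of the last item, mutation testing can be sketched like this in C; the qualifies function, the 'mutant', and the boundary case are all invented for the example:

```c
#include <stdio.h>

/* Original code: a customer qualifies for a discount at 100 points. */
static int qualifies(int points)
{
    return points >= 100;           /* original condition */
    /* A mutation tool might change this to: return points > 100; */
}

int main(void)
{
    /* A test suite without this boundary value would NOT detect the
     * mutant described above; this case 'kills' the mutant, showing
     * that the test data is doing useful work. */
    if (qualifies(100) != 1)
        printf("FAIL: boundary case points == 100\n");
    else
        printf("boundary case passed\n");
    return 0;
}
```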
11. What are 5 common solutions to software development problems?
* Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. In 'agile'-type environments, continuous coordination with customers/end-users is necessary.
* Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
* Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and built-in testing and diagnostic capabilities.
* Stick to initial requirements as much as possible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. This will provide them a higher comfort level with their requirements decisions and minimize excessive changes later on.
* Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes if possible to clarify customers' expectations.
12. What is software 'quality'?
* Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.
13. What is 'good code'?
* 'Good code' is code that works, is bug-free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards. For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation (a short illustrative snippet follows the list):
* Minimize or eliminate use of global variables.
* Use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
* Use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
* Function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
* Function descriptions should be clearly spelled out in comments preceding a function's code.
* Organize code for readability.
* Use whitespace generously - vertically and horizontally.
* Each line of code should contain 70 characters max.
* One code statement per line.
* Coding style should be consistent throughout a program (e.g., use of brackets, indentations, naming conventions, etc.)
* In adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
* No matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or if possible a separate flow chart and detailed program documentation.
* Make extensive use of error handling procedures and status and error logging.
* For C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading.)
* For C++, keep class methods small, less than 50 lines of code per method is preferable.
* For C++, make liberal use of exception handlers.
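As a small illustration of several of these rules (descriptive names, a header comment, error handling and logging, one statement per line, small functions), here is a hypothetical C function; the names and the LogError helper are invented for the example:

```c
#include <stdio.h>

/* Illustrative logging helper; a real program might write to a log file. */
static void LogError(const char *message)
{
    fprintf(stderr, "ERROR: %s\n", message);
}

/*
 * ReadConfigurationValue
 * Reads a single integer configuration value from the given file.
 * Returns 0 on success and -1 on failure, logging the error.
 */
int ReadConfigurationValue(const char *fileName, int *valueOut)
{
    FILE *configFile = fopen(fileName, "r");
    if (configFile == NULL) {
        LogError("could not open configuration file");
        return -1;
    }

    int readCount = fscanf(configFile, "%d", valueOut);
    fclose(configFile);

    if (readCount != 1) {
        LogError("configuration file did not contain an integer");
        return -1;
    }
    return 0;
}
```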
14. What is 'good design'?
* 'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help; some common rules of thumb include:
* The program should act in a way that least surprises the user
* It should always be evident to the user what can be done next and how to exit
* The program shouldn't let the users do something stupid without warning them.
15. What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?
* SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
* CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
* Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.
* Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
* Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
* Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
* Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
* Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.
* ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see the ISO web site for the latest information; in the U.S. the standards can be purchased via the ASQ web site.
* IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
* ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
* Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.
16. What is the 'software life cycle'?
* The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.
17. Will automated testing tools make testing easier?
* Possibly. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, they can be valuable.
* A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc. the application might then be retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
* Another common type of approach for automation of functional testing is 'data-driven' or 'keyword-driven' automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text box'). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained - such as via a spreadsheet - since they are separate from the test drivers. The test drivers 'read' the data/action information to perform specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases. A minimal sketch of this approach appears after the list below.
* Other automated tools can include:
* Code analyzers - monitor code complexity, adherence to standards, etc.
* Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
* Memory analyzers - such as bounds-checkers and leak detectors.
* Load/performance test tools - for testing client/server and web applications under various load levels.
* Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, a web site's interactions are secure.
* Other tools - for test case management, documentation management, bug reporting, and configuration management.
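To make the 'data-driven' idea above concrete, here is a minimal, hypothetical sketch in C; the test table, the login function, and the expected results are all invented for illustration (real setups would usually read the rows from a spreadsheet or CSV file):

```c
#include <stdio.h>
#include <string.h>

/* One row of test data: inputs plus the expected result. */
typedef struct {
    const char *username;
    const char *password;
    int expected_ok;   /* 1 = login should succeed, 0 = should fail */
} LoginTestRow;

/* Hypothetical function under test. */
static int login(const char *user, const char *pass)
{
    return strcmp(user, "admin") == 0 && strcmp(pass, "secret") == 0;
}

int main(void)
{
    static const LoginTestRow rows[] = {
        { "admin", "secret", 1 },   /* normal case */
        { "admin", "wrong",  0 },   /* bad password */
        { "",      "secret", 0 },   /* empty user name */
    };
    int failures = 0;

    /* The driver simply walks the data table; adding a test case
     * means adding a row, not writing new driver code. */
    for (size_t i = 0; i < sizeof rows / sizeof rows[0]; i++) {
        int actual = login(rows[i].username, rows[i].password);
        if (actual != rows[i].expected_ok) {
            printf("FAIL: row %zu (user '%s')\n", i, rows[i].username);
            failures++;
        }
    }
    printf("%d failure(s)\n", failures);
    return failures != 0;
}
```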
02-06-2007, 07:57 AM #5
Questions on LoadRunner
Interview Questions - LoadRunner
1. What is load testing? - Load testing is to test whether the application works fine with the loads that result from a large number of simultaneous users and transactions, and to determine whether it can handle peak usage periods.
2. What is performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi-user environment to determine the effect of multiple transactions on the timing of a single transaction.
3. Did you use LoadRunner? What version? - Yes, version 7.2.
4. Explain the load testing process? -
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives.
Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve; LoadRunner automatically builds a scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.
5. When do you do load and performance testing? - We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.
6. What are the components of LoadRunner? - The components of LoadRunner are the Virtual User Generator, the Controller, the Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online.
7. What Component of LoadRunner would you use to record a Script? - The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
8. What component of LoadRunner would you use to play back the script in multi-user mode? - The Controller component is used to play back the script in multi-user mode. This is done during a scenario run where a Vuser script is executed by a number of Vusers in a group.
9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
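A minimal sketch of how this looks in a Vuser script's Action section; the rendezvous and transaction names are invented for the example, while lr_rendezvous, lr_start_transaction, and lr_end_transaction are standard LoadRunner functions:

```c
Action()
{
    /* All Vusers wait here until the configured number arrive,
       then proceed together to create a simultaneous spike. */
    lr_rendezvous("deposit_cash");

    lr_start_transaction("deposit");
    /* ... recorded steps that perform the deposit would go here ... */
    lr_end_transaction("deposit", LR_AUTO);

    return 0;
}
```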
10. What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
11. Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.
12. Why do you create parameters? - Parameters are like script variables. They are used to vary input to the server and to emulate real users; different sets of data are sent to the server each time the script is run. This better simulates the usage model for more accurate testing from the Controller: one script can emulate many different users on the system.
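As a hedged sketch of parameter usage (the 'username' parameter and the URL are invented for illustration; web_url, lr_eval_string, and lr_output_message are standard functions):

```c
Action()
{
    /* '{username}' is replaced at run time with the next value
       from the parameter list defined in VuGen. */
    web_url("login_page",
            "URL=http://example.com/login?user={username}",
            LAST);

    /* lr_eval_string substitutes the current parameter value
       so it can be logged or used in C code. */
    lr_output_message("Running as user: %s",
                      lr_eval_string("{username}"));

    return 0;
}
```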
13. What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific. Here values are replaced by data which are created by these rules. In manual correlation, we scan for the value we want to correlate and use 'create correlation' to correlate it.
14. How do you find out where correlation is required? Give a few examples from your projects? - Two ways: first, we can scan for correlations and see the list of values which can be correlated, and from this we can pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look at the difference file to see the values which need to be correlated. In my project, there was a unique id developed for each customer - it was nothing but the Insurance Number; it was generated automatically, it was sequential, and this value was unique. I had to correlate this value in order to avoid errors while running my script. I did it using scan for correlation.
15. Where do you set automatic correlation options? - Automatic correlation from the web point of view can be set in the recording options, correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done using show output window and scan for correlation, then picking the correlate query tab and choosing which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value is to be created.
16. What is a function to capture dynamic values in the web Vuser script? - The web_reg_save_param function saves dynamic data information to a parameter.
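For example, a minimal hypothetical use (the boundaries and URL are invented; web_reg_save_param must be placed before the step whose server response contains the value):

```c
Action()
{
    /* Capture the session id from the next server response,
       using left/right text boundaries, into {sessionId}. */
    web_reg_save_param("sessionId",
                       "LB=session_id=",
                       "RB=\"",
                       LAST);

    web_url("home", "URL=http://example.com/home", LAST);

    /* The captured value can now be used like any parameter. */
    lr_output_message("Captured: %s", lr_eval_string("{sessionId}"));

    return 0;
}
```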
17. When do you disable logging in the Virtual User Generator? When do you choose standard and extended logs? - Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log option: when you select standard log, it creates a standard log of functions and messages sent during script execution to use for debugging; disable this option for large load testing scenarios. Extended Log option: select extended log to create an extended log, including warnings and other messages; disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the Extended Log options.
18. How do you debug a LoadRunner script? - VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.
19. How do you write user-defined functions in LR? Give me a few functions you wrote in your previous project? - Before we create the user-defined functions, we need to create the external library (DLL) with the function. We add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime, and GetPlatform are some of the user-defined functions used in my earlier project.
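A hedged sketch of such a DLL function (the name matches the GetCurrentTime example above, but the body is invented for illustration):

```c
/* Compiled into a DLL that is placed in the VuGen bin directory.
   LoadRunner expects the char* <name>(char*, char*) signature for
   user-defined functions, as described above. */
#include <time.h>
#include <string.h>

__declspec(dllexport) char *GetCurrentTime(char *buffer, char *unused)
{
    time_t now = time(NULL);
    /* ctime returns a string like "Wed Jun 30 21:49:08 1993\n" */
    strcpy(buffer, ctime(&now));
    buffer[strlen(buffer) - 1] = '\0';  /* strip trailing newline */
    return buffer;
}
```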
20. What are the changes you can make in run-time settings? - The run-time settings that we make are: a) Pacing - this includes the iteration count. b) Log - under this we have Disable Logging, Standard Log, and Extended Log. c) Think Time - in think time we have options like Ignore think time and Replay think time. d) General - under the General tab we can set the Vusers to run as a process or as multithreading, and whether each step is a transaction.
21. How do you perform functional testing under load? - Functionality under load can be tested by running several Vusers concurrently. By increasing the amount of Vusers, we can determine how much load the server can sustain.
22. What is Ramp Up? How do you set this? - This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set and a value to wait between intervals can be specified. To set Ramp Up, go to 'Scenario Scheduling Options'.
23. What is the advantage of running the Vuser as a thread? - VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.
24. If you want to stop the execution of your script on error, how do you do that? - The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the “Continue on error” option in Run-Time Settings.
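A small hypothetical example of aborting on a specific error condition (the page and message are invented; lr_abort, lr_error_message, and the LR_PASS return code are standard LoadRunner constructs):

```c
Action()
{
    int rc;

    rc = web_url("home", "URL=http://example.com/home", LAST);

    /* On a fatal condition, log the error and abort: the Vuser
       skips the rest of Actions, runs vuser_end, and is given the
       status "Stopped". Requires 'Continue on error' to be
       unchecked in Run-Time Settings. */
    if (rc != LR_PASS) {
        lr_error_message("Fatal: home page failed, aborting Vuser");
        lr_abort();
    }
    return 0;
}
```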
25. What is the relation between Response Time and Throughput? - The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.
26. Explain the Configuration of your systems? - The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.
27. How do you identify the performance bottlenecks? - Performance Bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.
28. If web server, database and Network are all fine where could be the problem? - The problem could be in the system itself or in the application server or in the code written for the application.
29. How did you find web server related issues? - Using Web Resource monitors we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.
30. How did you find database related issues? - By running the 'Database' monitor and with the help of the 'Data Resource Graph' we can find database related issues. E.g., you can specify the resource you want to measure before running the Controller, and then you can see database related issues.
31. How did you plan the Load? What are the Criteria? - Load test is planned to decide the number of users, what kind of machines we are going to use and from where they are run. It is based on 2 important documents, Task Distribution Diagram and Transaction profile. Task Distribution Diagram gives us the information on number of users for a particular transaction and the time of the load. The peak usage and off-usage are decided from this Diagram. Transaction profile gives us the information about the transactions name and their priority levels with regard to the scenario we are deciding.
32. What does vuser_init action contain? - The vuser_init action contains procedures to log in to a server.
33. What does vuser_end action contain? - The vuser_end section contains log-off procedures.
34. What is think time? How do you change the threshold? - Think time is the time that a real user waits between actions. Example: When a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as the think time. Changing the Threshold: Threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of the Vugen.
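For instance, a recorded script might contain something like this hedged sketch (the URLs and the 8-second pause are invented; lr_think_time is the standard function):

```c
Action()
{
    web_url("account_page", "URL=http://example.com/account", LAST);

    /* Recorded pause: the real user spent about 8 seconds reviewing
       the page. Pauses below the recording threshold (default
       5 seconds, as noted above) would not be recorded at all. */
    lr_think_time(8);

    web_url("logout", "URL=http://example.com/logout", LAST);
    return 0;
}
```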
35. What is the difference between standard log and extended log? - The standard log sends a subset of functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. This is mainly used during debugging when we want information about parameter substitution, data returned by the server, and advanced trace.
36. Explain the following functions: - lr_debug_message - The lr_debug_message function sends a debug message to the output log when the specified message class is set. lr_output_message - The lr_output_message function sends notifications to the Controller Output window and the Vuser log file. lr_error_message - The lr_error_message function sends an error message to the LoadRunner Output window. lrd_stmt - The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This function sets a SQL statement to be processed. lrd_fetch - The lrd_fetch function fetches the next row from the result set.
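A brief hypothetical example using three of these functions together; the message text is invented, and the LR_MSG_CLASS_EXTENDED_LOG flag is assumed to be the standard message-class constant:

```c
Action()
{
    /* Sent only when the corresponding message class is enabled. */
    lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG,
                     "Entering checkout step");

    /* Goes to the Controller Output window and the Vuser log. */
    lr_output_message("Checkout step started");

    /* Marks a failure in the LoadRunner Output window. */
    lr_error_message("Checkout returned an unexpected page");

    return 0;
}
```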
37. Throughput - If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.
38. Types of goals in a goal-oriented scenario - LoadRunner provides you with five different types of goals in a goal-oriented scenario:
o The number of concurrent Vusers
o The number of hits per second
o The number of transactions per second
o The number of pages per minute
o The transaction response time that you want your scenario to achieve
39. Analysis scenario (bottlenecks): In the Running Vusers graph correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check itinerary transaction very gradually increases. In other words, the average response time steadily increases as the load increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the test broke the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.
02-06-2007, 07:58 AM #6
Questions on SilkTest
Interview Questions - SilkTest
1. How does the Recovery System work in SilkTest?
2. What is the purpose of the user-defined base state method?
3. What are the components of SilkTest?
4. What are the important features of SilkTest compared to other tools?
5. How do you define a new class in SilkTest?
6. What is SilkMeter and how does it work with SilkTest?
04-16-2007, 07:22 AM #7
Recently, I completed the Foundation course in ITIL Service Management. I am a fresher and want to get a job. Can someone help me and let me know the interview questions which might be asked by an interview panel? If possible, answers to these questions will be greatly appreciated.
04-16-2007, 04:42 PM #8
Are you planning to take up a job in testing? If so, there are many questions placed in this link with answers by our friends. You could also refer to the link below, which has a huge collection of interview questions and answers. Share here any other questions you get which would help other members.
01-04-2008, 02:45 AM #9
1. Difference between testing and QA
2. V & V model
3. Black box & white box
4. Levels of testing (unit, module, integration, system, user acceptance)
5. Types of testing (regression, retesting, ad-hoc, load, stress, performance, volume, security, sanity, soap, monkey, exploratory, compatibility, etc.)
6. Different SDLC models
7. What is a test plan?
8. What is STLC (software testing life cycle)?
9. What is a bug? Explain the bug life cycle.
10. What is database testing?
11. What should a tester have in himself/herself to be a good tester? (5 points)
12. Basic real-time scenarios:
a. Test cases for an ATM machine (imp)
b. Test cases for opening/closing a bank account
c. Test cases for buying an insurance policy
d. Test cases for opening a Yahoo account
e. Test cases for a login window (imp; a sample set follows this list)
f. Test cases for a mobile phone
g. Test cases for a computer game (keyboard game)
h. Test cases for the Google search engine (imp)
13. How to estimate the testing task
14. What is risk analysis?
15. What is impact analysis?
16. What is a risk matrix?
17. What is a traceability matrix? (imp)
18. Black box vs. white box
19. What is an SRS?
20. What is a walkthrough? An inspection?
21. Do you remember any favourite bug that you had found?
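As a hedged sample for scenario (e) above, a few illustrative test cases for a login window (the exact cases would depend on the application's requirements):
- Valid username with valid password logs the user in.
- Valid username with invalid password shows an error and does not log in.
- Blank username or blank password prompts the user instead of submitting.
- The password field masks the typed characters.
- After some specified number of consecutive failed attempts, the account is locked or a CAPTCHA appears (if the requirements call for it).
- SQL-injection-style input (e.g. ' OR '1'='1) is rejected safely.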
01-22-2012, 03:38 AM #10
In a test plan you have to write what features are to be tested and what features are not to be tested, plus risk assessment and references - all of these you have to include. For Yahoo, first analyze what scenarios you have; after that, start writing.