The Best Software Testing Software


  • #10766
    Ronan Healy
    Keymaster
    @ronan

    Just a simple question really.

    G2 Crowd recently released their survey of software testing tools, ranking them on customer satisfaction combined with other factors to arrive at the best software testing tools.

    So, going through their list of HP ALM, HP Quality Centre, Applause, JMeter, TestRail, Appium, Selenium and more, they have awarded the top spot to HP Quality Centre.

    I wonder whether it is worth comparing testing tools like this at all. Would you agree with the result? They seem to have left a few tools out of the list.

    #10785
    Alin
    Participant
    @groza-alin88

    Hi Ronan,

    As you said, only some of the tools available on the market are listed there. It should also include tools developed by other companies, such as Ranorex or IBM.

    However, I don’t know if such a comparison really works. Some tools are better for GUI testing, others for API testing. There are also tools with better features for integration with other products, and some offer advanced functionality for version control or bug tracking. So a company selects a tool based on the features the tool offers and the project’s needs regarding the testing activities. The idea of making this ranking and naming one tool “the best testing tool” does not make much sense to me.

    I think HP was ranked first because it is well known, it can be used for testing all platforms (web, desktop and mobile), and it includes features for test management activities. I don’t know if it has the largest number of users, but it is one of the most popular paid tools on the market.

    Regards,
    Alin

    #10842
    Ron
    Participant
    @ronp

    I agree with Alin. Perhaps if you sub-categorized the tools based on their primary purpose or feature set, it would be easier to rank them. Mobile application automation testing versus manual testing versus defect tracking – something along those lines.

    #10853
    Ronan Healy
    Keymaster
    @ronan

    I would agree the list isn’t that comprehensive. It also seems that the tools were ranked in terms of popularity rather than ability.

    Has anyone seen any other attempts at ranking testing tools?

    #10855
    Efstratios
    Participant
    @straris

    @Ronan, you are definitely right about popularity. There are so many factors that can make a tool the best fit for one team and a really bad fit for another that it simply doesn’t make sense to compare them.
    Companies usually invest a huge amount of time just researching which tool best fits their needs.

    #10857
    Aaron
    Participant
    @fijiaaron

    It’s actually sad that HP Quality Center ranks number one among test tools. But it’s believable.

    Quality Center is old “enterprise” software that hasn’t really improved much in 10+ years (HP ALM 11 has some improvements, like a web service API, but not much). The UI is painful, the feature set is basic, and it’s not very extensible. It’s also horribly over-priced and poorly supported.

    But there isn’t really a good competitor. Mid-range products like TestComplete from SmartBear are less painful and less expensive, but not much better in terms of features and integrations. I’m actually surprised there isn’t an open source solution in the space.

    Maybe I shouldn’t be surprised, though. Open source shops tend towards a different method: using a standard developer toolset to produce test automation and results. This is where Selenium fits in. Teams use the same programming language and IDE that developers are used to, build test libraries, and rely on continuous integration for reporting test results. Unfortunately, this leaves out business-level requirements and traceability.
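
    To make that concrete, here is a minimal sketch of the developer-toolset approach, written in Python with Selenium and the standard unittest runner. The URL and element locator are placeholders I invented for illustration; a CI server would simply run this like any other unit test and collect the results.

        # Minimal Selenium check written like any other developer test.
        # The URL and the "username" locator below are placeholders, not a real app.
        import unittest
        from selenium import webdriver
        from selenium.webdriver.common.by import By


        class LoginSmokeTest(unittest.TestCase):
            def setUp(self):
                self.driver = webdriver.Chrome()  # or a remote grid node in CI

            def test_login_page_shows_username_field(self):
                self.driver.get("https://example.test/login")        # placeholder URL
                field = self.driver.find_element(By.ID, "username")  # placeholder locator
                self.assertTrue(field.is_displayed())

            def tearDown(self):
                self.driver.quit()


        if __name__ == "__main__":
            unittest.main()  # CI picks up the pass/fail results from this run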

    BDD tools like Cucumber have tried (with limited success) to fill this gap, but don’t really map well to the actual feature development, testing, and release workflow.

    That’s probably the one thing QC has done right: the domain model of requirements, tests, defects, test runs, etc. And in this case, getting the domain model right is apparently what’s most important, even if the actual workflow is painful.

    I’ve written tools to integrate Quality Center with Selenium test results, but they don’t really bridge the gap adequately.

    #11006
    mergan
    Participant
    @mergan-velayudan

    @Ronan, I’ve led a couple of large tool evaluation and deployment/transition activities at different organisations (for Application Lifecycle Management, though), so I’ll just comment on how I’ve approached this subject in the past. I’m the first to admit it’s not perfect, but I usually start with analyst reports to get the lay of the land in a particular domain. And no, you don’t have to pay for these reports: most times, vendors that do well in a report will provide a free copy. So, for testing, I did a quick search and found Gartner reports for the “Magic Quadrant for Integrated Software Quality Suites” (free download from the SOASTA website), the “Magic Quadrant for Software Test Automation” (free download from the TestPlant website) and the “Gartner Magic Quadrant for Application Security Testing” (several download links if you search for it).

    These analyst reports must be taken with a pinch of salt, but they give you a good idea of the big players in a space. Most times, you can easily get a trial license for any of the products to really see what it offers. From my experience, you can get multiple renewals of the trial license if you find you need a bit more time, so don’t let that bother you too much. The key take-away from an analyst report is not just what the company is doing today but how solid its vision is for the future.

    Rather than relying on the analyst report alone, though, just use it as a basis for the start of your investigations. As I mentioned, my purpose has usually been to plan a switchover of the entire development process from a plethora of small, developer/tester-selected tools to an end-to-end lifecycle management tool (I believe in lifecycle traceability), so I’ve spent considerable time doing research and evaluations (it was my day job!) before we chose a solution. @straris, at the extreme end of the scale, we did a one-year evaluation of HP ALM (QC+) before deciding it didn’t meet our requirements (we used it on a few live projects). And yes, we had to pay for licenses for this evaluation, but we did it with a small project.

    The benefit of this approach is that it allows you to build up a set of requirements (local support, functionality, ease of use, out-of-the-box integrations, ease of custom integrations, ease of admin, infrastructure requirements, etc.) against which you can objectively measure a variety of tools. Essentially, you weight the factors based on your organisational needs and goals (the Analytic Hierarchy Process) and score the tools against these weighted criteria, as in the sketch below. An added benefit is that it also allows you to assess the actual level of support in your region for a specific tool. For us, local support is always at the top of our list of selection criteria, because the functionality of the tool is not much help when it’s not working or when we can’t extend it to meet new requirements.
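
    To make that weighted scoring concrete, here is a rough sketch in Python. The criteria, weights and tool scores are entirely made up for illustration; they are not numbers from any evaluation I actually ran.

        # Hypothetical weighted-criteria scoring of candidate tools.
        # Weights reflect organisational priorities and should sum to 1.0.
        weights = {
            "local_support": 0.30,
            "functionality": 0.25,
            "out_of_box_integrations": 0.20,
            "ease_of_admin": 0.15,
            "infrastructure_fit": 0.10,
        }

        # Scores per tool on a 1-5 scale, as gathered during trial evaluations (invented here).
        scores = {
            "Tool A": {"local_support": 4, "functionality": 5, "out_of_box_integrations": 3,
                       "ease_of_admin": 2, "infrastructure_fit": 4},
            "Tool B": {"local_support": 5, "functionality": 3, "out_of_box_integrations": 4,
                       "ease_of_admin": 4, "infrastructure_fit": 3},
        }

        def weighted_score(tool_scores):
            # Multiply each criterion score by its weight and sum the results.
            return sum(weights[c] * tool_scores[c] for c in weights)

        # Rank the tools by their weighted score, highest first.
        for tool in sorted(scores, key=lambda t: weighted_score(scores[t]), reverse=True):
            print(f"{tool}: {weighted_score(scores[tool]):.2f}")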

    So, that’s my two cents on ranking testing tools. However, I can’t leave this topic without asking whether people looking for new testing tools should instead consider how those tools will fit into the ecosystem of tools they already have across the end-to-end product development process. Testing is just one part of the process, and the value you can unlock from integrating your lifecycle tools is much higher than the convenience of a cool feature in a testing tool. I’m talking, of course, about ALM solutions that ensure everyone, from the tester to the developer to the scrum master, project manager or release manager, is on the same page (within the same tool).

    And before you say that it’s often not possible for a tester to influence what happens throughout the life cycle, I would beg to differ. In both instances where I eventually led the transition from individual point tools (ones that solve a single problem, such as testing) to ALM solutions, I started as a lone dissenting voice against the spaghetti of non-integrated tools but (several years later in both cases) had the pleasure of seeing everyone work in an ALM solution. In the organisations I’ve worked at, testing always gets bashed for being too slow or too reactive; ALM tools allow you to shed light on the entire development process and make the contribution (both good and bad) of your testing process much more visible and comparable to the other processes.

    In many cases, these tools are perceived as expensive, but so is the cost of not using integrated tools across the lifecycle. You also need to take into account that the purchase price of the tool is often the smallest aspect of the overall cost to the organisation; customizing and maintaining the tool is much more costly in the long run, so don’t let the price tag cloud your decision making. However, if you are a small shop with just a couple of testers/developers, then a lot of the tools are free or nearly free (IBM Jazz for fewer than 10 users, for example, or Atlassian JIRA for small teams (with integrated plugins), even though it’s not really an ALM solution).

    If you think this is going to be a hard sell in your organisation, speak the language that most organisations speak best: money. Put together a return-on-investment calculator that takes into account the likely costs of the tool (this applies even if you only want to replace your testing tool), and put together a process improvement plan for the problems you think the tool will allow you to solve. Then put actual numbers against the potential savings from these process changes; a toy example follows below. Most times, you’ll find the process changes pay off the cost of the tools within the first year.
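
    As a back-of-the-envelope illustration of that ROI calculation, here is a tiny Python sketch. Every figure in it is invented; plug in your own cost and savings estimates.

        # Hypothetical year-one ROI for a tool switch; all numbers below are made up.
        license_cost_per_year = 20_000          # purchase / subscription
        customisation_and_maintenance = 35_000  # usually the bigger cost
        total_cost = license_cost_per_year + customisation_and_maintenance

        # Savings from the process improvements the tool is expected to enable.
        hours_saved_per_week = 30    # e.g. less manual reporting and duplicate data entry
        loaded_hourly_rate = 60
        working_weeks_per_year = 48
        annual_savings = hours_saved_per_week * working_weeks_per_year * loaded_hourly_rate

        roi = (annual_savings - total_cost) / total_cost
        print(f"Year-one cost: {total_cost}, savings: {annual_savings}, ROI: {roi:.0%}")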

    Hope this all helps, I’m off to work on my ALM tool!

    #14815
    Archana
    Participant
    @archana

    As rightly pointed out earlier, the tools being compared are used for different purposes, and the comparison is based on generic factors, so I do not see much use in this kind of comparison. A person or organization that wants to decide on a tool will look for a more specific comparison of the relevant tools.
