Adding automatic tests to an ASP.NET MVC application (part 1)

I. Introduction

Soon after software development emerged, it became obvious that any application beyond a homework assignment needs testing. Testing evolved naturally along with frameworks, on both the client side and the server side, and today we talk about test-driven development and behavior-driven development (e.g. Cucumber). Of course, the most efficient tests are automated ones.

A couple of years ago, we had the opportunity to start a project from scratch (MDW Automatic Testing, together with Claudiu) and I said to myself: why not develop a small framework to easily add automated tests?

Since ASP.NET Core was not yet mature enough for production at the time, we stuck with the more solid ASP.NET MVC 5, along with its natural companion, Entity Framework 6. On the client side, we used AngularJS (Angular was not mature enough back then).

So, this article will focus on automated testing with this tech stack, but most of the concepts apply to other stacks as well.


II. Unit testing

Simply put, unit testing refers to testing the functions in your code.

Unit tests form the base of the automated test pyramid, yet in my experience they are often skipped because they require a lot of coding and thus more time (also check this nice article about the levels of testing). We made a compromise and chose to cover critical functionality with unit tests and use integration tests for the rest.

To make the code unit-testable, we used a few tools and design patterns covering the following concepts:

  • Dependency injection — we chose Ninject. Of course, there are alternatives, but it seemed the most versatile. It lets us easily replace a service dependency when needed (e.g. with a mock object).
  • A mocking framework — we chose NSubstitute. It makes it easy to replace classes or functions for unit testing.
  • A unit testing framework — we use NUnit. It integrates nicely with Visual Studio and we use it to run all our tests (including the Selenium WebDriver ones).
  • Generic repositories — a design pattern that involves creating a generic class that handles the basic data operations (add, insert, bulk insert, update, bulk update, etc.). Unfortunately, Entity Framework 6 is not very friendly when it comes to dependency injection and mocking, so generic repositories come in handy since they can simply be replaced with generic in-memory repositories. A short sketch of how these pieces fit together in a test follows this list.
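To make that concrete, here is a minimal sketch of a unit test built with these tools: a service receives a repository through its constructor, and the test swaps that repository for an NSubstitute mock. The Customer, IRepository<T> and CustomerService names below are purely illustrative and are not taken from the project.

    using System.Linq;
    using NSubstitute;
    using NUnit.Framework;

    // Illustrative types only; the real project defines its own entities,
    // services and repository abstraction.
    public class Customer
    {
        public int Id { get; set; }
        public bool IsActive { get; set; }
    }

    public interface IRepository<T> where T : class
    {
        IQueryable<T> GetAll();
    }

    public class CustomerService
    {
        private readonly IRepository<Customer> _customers;

        // The repository is injected (via Ninject in the application, by hand in tests).
        public CustomerService(IRepository<Customer> customers)
        {
            _customers = customers;
        }

        public int CountActiveCustomers()
        {
            return _customers.GetAll().Count(c => c.IsActive);
        }
    }

    [TestFixture]
    public class CustomerServiceTests
    {
        [Test]
        public void CountActiveCustomers_CountsOnlyActiveOnes()
        {
            // Replace the database-backed repository with an NSubstitute mock.
            var repository = Substitute.For<IRepository<Customer>>();
            repository.GetAll().Returns(new[]
            {
                new Customer { Id = 1, IsActive = true },
                new Customer { Id = 2, IsActive = false }
            }.AsQueryable());

            var service = new CustomerService(repository);

            Assert.AreEqual(1, service.CountActiveCustomers());
        }
    }

In the application itself, Ninject would bind the abstraction to the Entity Framework-backed implementation (something along the lines of kernel.Bind(typeof(IRepository<>)).To(typeof(GenericRepository<>))), so production code and tests share the same interface.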

NOTE: Thankfully, ASP.NET Core 2.0 was built with dependency injection in mind, and its companion Entity Framework Core ships with an in-memory database provider that makes unit tests much easier.
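As a rough illustration of that note (not code from our project, and using a hypothetical AppDbContext and Product entity), the EF Core in-memory provider can stand in for the real database like this:

    using Microsoft.EntityFrameworkCore;

    // Hypothetical context used only to illustrate the EF Core InMemory provider.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class AppDbContext : DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

        public DbSet<Product> Products { get; set; }
    }

    public static class TestContextFactory
    {
        public static AppDbContext Create()
        {
            // No real database needed: the data lives in memory for the lifetime of the test.
            var options = new DbContextOptionsBuilder<AppDbContext>()
                .UseInMemoryDatabase(databaseName: "UnitTests")
                .Options;

            return new AppDbContext(options);
        }
    }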

Enough talk, let’s dive into some code.

  1. The generic repositories

All generic repositories implement a single interface (see below). Besides the regular (non-cached) repository and the in-memory one, there is also a “cached” repository that is plugged in for some entities to avoid database fetches (typically entities mapped to rarely changing, relatively small tables).
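The project’s actual interface is shown in the complete article; purely as an illustration of the shape such a contract usually takes (the names and members below are assumptions, not the project’s exact definition):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    // Illustrative sketch of a generic repository contract; the real interface
    // in the full article may differ in names and members.
    public interface IGenericRepository<T> where T : class
    {
        IQueryable<T> GetAll();
        IQueryable<T> Find(Expression<Func<T, bool>> predicate);
        T GetById(object id);

        void Add(T entity);
        void AddBulk(IEnumerable<T> entities);
        void Update(T entity);
        void UpdateBulk(IEnumerable<T> entities);
        void Delete(T entity);

        void Save();
    }

The Entity Framework-backed repository, the cached one and the in-memory one can then all be registered against a single contract like this, so unit tests never need to touch the database.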

Read the complete article here.

Written by Alexandru Dragan, METRO SYSTEMS Romania

Artificial intelligence will certainly be part of the future world, will we?


For some time now, artificial intelligence has been revolutionizing the way we see banking, fintech, retail, education and even the medical system. At this point, it is clear to everyone that whatever future we might have, AI will definitely be part of it. What we don’t seem to agree on is how good such technology will be for humanity. On the one hand, many experts believe that AI advancements will bring great value and help humans improve their existence over the coming decades. On the other hand, there are those who worry that, in time, we might start to change our idea of what it means to be human.

In the summer of 2018, a group of technology pioneers, innovators, developers, business leaders and policy makers debated this exact topic and reached a unanimous conclusion. They predicted that artificial intelligence will augment human effectiveness, but also threaten human autonomy and capabilities. They also discussed the possibility of systems becoming so smart that they could actually outperform humans even at what is currently impossible for AI: complex decision-making, reasoning and learning.

In a recent interview, Bill Gates also declared that “artificial intelligence is both promising and dangerous, like nuclear weapons and nuclear energy”. However, Microsoft’s co-founder believes that medicine and education can and should be the main areas to benefit from what AI brings to the table.

The elephant in the room

The question of whether or not humans will be replaced by machines has haunted our dreams ever since AI became almost ubiquitous in our lives. Nowadays, there is almost no industry that has not deployed artificial intelligence technology. Banks use it to detect fraudulent transactions and make predictions, retailers save a lot of time and money by adding AI capabilities for inventory and shelving, and some hospitals have adopted it to help doctors identify certain diseases. Furthermore, data shows that more than a third of US hospitals have at least one robot that can perform surgeries.

With the possibility of machines someday truly thinking and acting like us (or even better than us), everything seems to point towards humans becoming obsolete. How could we not worry?

FIRST ‘TAKEOVER’ WE NEED TO TACKLE – AI AND UNEMPLOYMENT

Job security has always been one of people’s main worries. And with AI disrupting so many domains, our replacement has started to look more like a fact than a mere possibility. Apart from repetitive and predictable tasks, there are also a number of white-collar jobs that might be done (better) in the future by an AI system. Lawyers, doctors, writers and journalists are also at risk of being replaced by machines.

But such technological development will also bring many other changes and innovations that we will not be able to address all at once, which means new skills and capabilities will be required of humans. This will lead to new jobs being created, opening up different opportunities for individuals. The hypothesis that AI will create more jobs than it replaces is also supported by a Gartner study, which predicts that in 2020 AI will create 2.3 million new jobs while eliminating 1.8 million traditional ones.

Read the complete article here.

Written by Ionela Bărbuță, Strongbytes

A study towards understanding the job titles in a DevOps world

DevOps is an IT concept that has been heavily debated, marketed and talked about in the industry for quite a few years. What I love most about it is that it continuously improves; it is like a living organism, as is any IT company built on people’s creativity, passion and knowledge.

As I see it, if DevOps were a person, automation would be its heart, communication and knowledge sharing would be its blood, agile product management would be its brain, while servant leadership (or, even further, transformational leadership) would be the air it breathes.

In this article I am going to take a look at the DevOps heart and how we can keep it pumping.

In a DevOps environment, practices such as continuous integration, continuous delivery, continuous deployment, continuous monitoring, continuous testing, self-healing and auto-scaling are a must; and all of these can only be achieved by automating workflows, operations and whatever repetitive tasks involve human effort.

To cover this automation need, several job titles have appeared on the market: Build Engineer, Release Engineer, DevOps Engineer, Site Reliability Engineer, [Cloud] Platform Engineer and various other flavors of these. Of course, passionate debates and very well documented papers have appeared on what exactly each of them means, how they overlap, how they complement each other and what kind of organizational structure they fit into (if you are curious, please see the references of this article).

How these job titles are understood and used also depends on the geographical location, in direct correlation with how many companies/teams have adopted DevOps (culture, methodology, processes and tools). For reference, the 2017 State of DevOps Report by Puppet & DORA states that 54% of software teams in North America have already adopted DevOps, compared with 27% in Europe and Russia and 10% in Asia, so I expect some differences in maturity level and thus in the job titles (and, implicitly, in roles and responsibilities).

From my observations of the Romanian job market, I have built the following picture of how these job titles are distributed:

As a storyline, I would put it like this:

· Two to three years ago, when I first analyzed the market, there were many job postings for Build Engineers, meaning someone with technical expertise in automating the build process who would be able to implement continuous code integration as a first step towards building a Continuous Delivery Pipeline (CDP). The technical skills most often required for this role: source control management and tools (e.g. Git, SVN), build and packaging tools (e.g. Ant, Maven, Makefile), CDP-related tools (e.g. Jenkins, Groovy, TeamCity, Artifactory, Nexus) and knowledge of the CDP workflow and its technical components;

· Then, for a fairly short period, I observed an increase in requests for so-called Release Engineers, from whom companies demanded the same knowledge as from Build Engineers plus, in addition, strong knowledge of managing environments/platforms, configuration management and deployment automation, and the configuration/administration/management of agile-specific tooling. They were expected to build a complete, reliable Continuous Delivery Pipeline, connecting all the technical pieces together (e.g. integrating Selenium for test automation, Docker or cloud providers’ SDKs) and implementing best practices in the workflow. The term is not used that much anymore, at least on the Romanian market. Searching through LinkedIn Jobs, I can see that it is not heavily used worldwide either, in comparison with Build Engineer (a few times more job postings) or DevOps Engineer (requested some 10-15 times more often). Maybe the word “release” was not that inspiring, and everyone in the industry was thinking of the old release policies with long feedback loops, which is why it was more or less dropped. On the other hand, Google mentions the Release Engineer role in the book “Site Reliability Engineering: How Google Runs Production Systems” as defining “best practices for using [their] tools in order to make sure projects are released using consistent and repeatable methodologies. Examples include compiler flags, formats for build identification tags, and required steps during a build.”

· There is also an increasing number of requests for [Site] Reliability Engineers (SRE). This is a role launched by Google and strongly endorsed by one of the top DevOps researchers, Jez Humble, and it is rapidly gaining adoption, in direct correlation with Google Cloud Platform’s growing market share. SRE, as Google defines it, is a team with both coding and systems engineering skills, which “is fundamentally doing work that has historically been done by an operations team, but using engineers with software expertise”. The team is expected to be responsible for availability, latency, performance, efficiency, change management, continuous monitoring, emergency response, and building up strategies for rollbacks, auto-scaling or self-healing. The required technical expertise covers performance monitoring tools (e.g. Datadog, OMD, Grafana), Linux scripting, programming languages (e.g. Go, Python, Java, JavaScript), cloud technologies (e.g. Google Cloud Platform, AWS, OpenStack) and microservices architecture.

Read the complete article here.

Written by Aura Virgolici, METRO SYSTEMS Romania

Welcome to the retail robot takeover

One of the main areas where we’re starting to see the proven power of artificial intelligence is the retail sector, whether it’s a robot salesclerk whizzing down the aisles, ready to assist customers in need of directions, or number-crunching systems that make sense of big data, rendering it both manageable and actionable so a company can make informed decisions that positively impact its bottom line. There’s no denying that we’re only beginning to discover the value AI can bring to the table when it comes to sales, service and support that enhance the customer experience.

One proven example of the move towards making large volumes of data meaningful within the retail ecosystem is Salesforce Einstein, a robust set of services that brings sophisticated AI capabilities into Salesforce’s existing, popular CRM. With Einstein, retailers can transform business as usual, helping every department run efficiently and do what it does best for its valued customers. A company can offer predictions and recommendations based on customer data and develop apps that transform how shoppers interact with its online and physical stores.

Read more about how AI is impacting e-commerce in this article written by OSF Commerce, an OSF Global Brand.

Why I would not choose Java to run in Kubernetes

Together with the rise of Java, Java application servers (e.g. Tomcat, IBM WAS) also came into the game. The idea of an application server was to have one JVM and run multiple deployments on it (e.g. JAR, EAR, WAR files); this way, the memory footprint needed to run lots of processes could be reduced.

Historically, teams were clearly split into application administrators, who could also be responsible for the operations side, and developers, who focused on building the business features of their product and not so much on how their code ran in terms of performance, reliability and so on. App admins, on the other hand, took a keen interest in how the code ran on the applications they administered, so that they would have a stable system. As one of them, even with optimized code, I still had a job that killed the Java process every 24 hours, just to be sure, since we had processes that simply froze after a week.

Then containers were launched, along with the idea that everything can be packed into images and those images can run anywhere. Kubernetes would be used for orchestration, and monolithic applications would be migrated to the cloud by rewriting them as microservices.

The app admins would stay behind to administer the monolith, while the developers would go and build small new microservice-based applications in Docker.

In some of these new teams there were no application administrators to raise awareness of JRE, JVM and garbage collector related issues, and nobody else took care of this aspect. Instead, Kubernetes is used to deploy our shiny new Docker images. So there are not many people left to wonder whether Java is really suitable for this kind of architecture from a reliability point of view.

As an old app admin moving to a new SRE position, I learned about Go (Golang), which looks like a much better alternative for microservices.

Meanwhile, Kubernetes clusters are full of Java code, and there are apps that cannot start without a minimum of 200 MB of RAM, because the Java code runs inside a container image with its own JVM, JRE, JARs, CVE security patches and so on (all adding up to an image size of 500+ MB).

[Screenshot: Scala microservice]

While learning Go, I started to play around by writing small compiled Go microservices:

  • a static data service, with an 11 MB image size and 9 MB of RAM used
  • a DB command model service, with a 13 MB image size and 23 MB of RAM used
[Screenshot: Golang static data service]
[Screenshot: Golang database command model service]

(*both the Java and the Go screenshots were taken from services that had been running for at least one day).

Read the complete article here.

Written by Ionut Ilie, METRO SYSTEMS Romania

Masterclass: Testing Strategies for Microservices

Date: 12 April 2019, CLUJ
Training fee: €300/participant plus VAT, only 25 seats available

To book your seats, please contact us by email ([email protected]) or phone (+40 741 103 133).

About the training

Software development is trending toward building systems using small, autonomous, independently deployable services called microservices.

Leveraging microservices makes it easier to add and modify system behavior with minimal or no service interruption. Because they facilitate releasing software early, frequently, and continuously, microservices are especially popular in DevOps.

But how do microservices affect software testing and testability? Are there new testing challenges that arise from this paradigm? Or are these simply old challenges disguised as new ones?

Join Tariq King as he describes the pros and cons of testing under the microservices architecture.

Learn how to develop a microservices testing strategy that fits your project needs—and avoids common pitfalls and misunderstandings.

Whether you’re already using microservices or just considering making the shift, come and engage with Tariq as he brings clarity to testing in a microservices world.

The trainer

Tariq King is the senior director and engineering fellow for quality and performance at Ultimate Software. With more than fifteen years’ experience in software testing research and practice, Tariq heads Ultimate Software’s quality program by providing technical and people leadership, strategic direction, staff training, and research and development in software quality and testing practices. Tariq is a frequent presenter at conferences and workshops, has published more than thirty research articles in IEEE- and ACM-sponsored journals, and has developed and taught software testing courses in both industry and academia. His primary research interest is engineering autonomous self-testing systems. He is cofounder with Jason Arbon of the Artificial Intelligence for Software Testing Association.

Date: 12 April 2019, CLUJ
Training fee: €300/participant plus VAT, only 25 seats available

To book your seats, please contact us by email ([email protected]) or phone (+40 741 103 133).

Lunch and Coffee Breaks included in the price of the Masterclass.

Browse more masterclasses here.

The real perils of artificial intelligence


Why is the Terminator scenario unrealistic for AI? Modern AI focuses on automated reasoning, based on a combination of perfectly understandable principles and plenty of input data, both of which are provided by humans or by systems deployed by humans. To think that common algorithms such as the nearest neighbour classifier or linear regression could somehow spawn consciousness and start evolving into superintelligent AI minds is far-fetched, in our opinion.

The idea of an exponential increase in intelligence is also unrealistic, for the simple reason that even if a system could optimise its own workings, it would keep facing harder and harder problems that would slow down its progress. This would be similar to the progress of human scientists, which requires ever greater efforts and resources from the whole research community and indeed the whole of society, resources a superintelligent entity would not have access to.

Read more about the doomsday scenarios within the AI field in this article written by Strongbytes.

3 Fintech areas where AI brings the most value


Having discovered the power of artificial intelligence, people all over the world are doing their best to integrate its capabilities into their businesses. According to data from Gartner, by the end of 2020, 20% of citizens in developed nations are expected to use AI for everyday operational tasks.

Customer support revived

Customer-oriented systems such as text chats, voice systems and chatbots are no longer a novelty. They can carry on human-like conversations and deliver expert advice at a low cost. A lot of banks and companies offering financial services have these capabilities in place.

Credit scores and loans

The use of AI in this field brings a major improvement to the decision-making process by turning it into a faster and more reliable one. This translates into allowing more people to apply for a loan or credit, thus increasing their chances of actually getting it.

Fraud detection

Since the advent of electronic commerce and online banking, payment fraud has been a constant in our lives. A 2018 report from McAfee shows that the cost of cybercrime currently amounts to 0.8% of the global gross domestic product. The most prevalent type of financial cybercrime is credit card fraud, which is growing fast due to the increase in online transactions. But AI tools and machine learning algorithms are proving quite successful at detecting financial fraud.

Read more about the topic in this article written by our friends from Strongbytes.

The trolley dilemma in AI


There is a principle in ethics that forces us to carefully think about the consequences of an action and consider whether its moral value is determined solely by its outcome. The trolley dilemma is a classic experiment developed by philosopher Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985.

The situation goes like this: you see a runaway trolley moving toward five tied-up (or otherwise unaware of the trolley) workers on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track and the five people on the main track will be saved. However, on the side track there is a single person, just as oblivious as the other workers. The ethical dilemma is this: should you pull the lever, causing one death but saving five? Or should you do nothing and allow the trolley to kill the five people on the main track?

Ethics and autonomous cars

The fourth industrial revolution, powered by artificial intelligence, is adding cognitive capabilities to everything, and it is definitely a game changer. We are using AI to build autonomous vehicles and to automate processes, jobs and, in some cases, even lives. Considering the impact it will have on individuals and on the very future of humanity, addressing the topic of ethics is a must.

The ethics of automated jobs

Although the Luddite movement ended a long time ago, some people still feel fear and anxiety when it comes to technology and the automation of jobs. And with the development of AI systems, there is currently a general debate about whether or not machines will steal all the jobs.

Read more about the topic in this article written by our friends from Strongbytes.

Tech Conferences – why bother?


IT is a man’s world, but IT wouldn’t be nothing without a woman or a girl… Perhaps this is what James Brown would have said if he were still alive and working in the tech field.

The Hidden Figures of the Tech Industry

”There is a special place in hell for women who don’t help other women” — Madeleine Albright

And here we are in 2018, still dealing with gender inequality and boys’ clubs across most of the tech industry. This is where communities like Women in Tech enter the stage, playing their role by the book. As part of this global movement, Women in Tech Cluj organizes monthly meetups to discuss trending topics in Cluj, the so-called Silicon Valley of Transylvania.

“Our purpose is to make women’s work known, share our stories on how we keep up and spread opportunities to grow.” – Women in Tech Cluj

To bother, or not to bother: this is the question

More and more, both men and women in the tech community want to hear a female perspective. After all, introducing diverse viewpoints into any conversation adds a depth that helps us better understand and resolve the problems we all face. Still, when it comes to women speakers, things vary locally from event to event, with 27% female speakers at JSHeroes and only 13% at Techsylvania; the gap between male and female speakers remains an issue worth our attention.

Read more about how to choose the right tech events to attend, as well as about getting to conferences and becoming a speaker, in this article written by Wolfpack Digital.