At the beginning of 2015, my co-founder and I decided to rent a couple of desks at TechHub Bucharest. It was our first time trying out a co-working space, and I was skeptical that I could make the switch after working from a private office for so long.
Frankly, I wasn’t in the mood to get out of the house.
After securing a seed investment at the beginning of 2014, we had failed to generate enough revenue or raise a follow-on investment. As a consequence, we had to fire the development team in September. Out of a team of 10 people, we were down to two founders. Needless to say, it was one of the most difficult things we’ve ever had to do.
However, like many turning points in our lives, it’s easier to connect the dots looking back.
Knowing what I know now, I can see that moving to TechHub Bucharest was one of the best decisions we ever made. It was one of those moments that started as a snowball and, in time, turned into a small avalanche.
Going through the dip
2015 was about rebuilding.
After hitting rock bottom at the end of 2014, my co-founder and I were in a depressed state.
But a lack of resources isn’t all bad. At that stage, it sparked the creativity that kept our startup alive. By autumn, we had identified some opportunities: we applied to several programs for female entrepreneurs that allowed us to pitch remotely.
We got accepted into an accelerator in the US and turned it down because it wasn’t a good fit. One of their requirements was to incorporate a new company in the US, and given our shareholders’ structure, approximately half of the funds provided by the program would have gone to taxes and legal fees. In addition, their management team was not willing to negotiate the term sheet.
The education system is clearly one of the fields that should, and definitely will, continue to change over the coming decades to keep up with technological advancements. Hence, it is no surprise that artificial intelligence has found its way into the learning experience. According to market data, the AI education market is currently valued at half a billion USD, an amount set to rise to six billion by 2024.
Integrating artificial intelligence capabilities into the education system brings benefits to teachers and students alike, allowing for more personalized academic paths, better results and better prepared individuals. In this context, let’s have a look at some of the advantages of adopting AI in education.
CUSTOMIZED LEARNING EXPERIENCES
Since the learning process has moved towards the digital world over the last couple of years, data about students’ educational progress has grown significantly and has become easier to manage. Data can now be collected and analyzed, leading to more personalized learning plans.
Hopefully, this will allow teachers and professors to treat students according to their needs and will provide substantial support to those who have difficulty keeping pace with their colleagues. Detecting students with learning disabilities and addressing their needs early in their education is much easier now, and gives better results as well.
ACADEMIC DEVELOPMENT AND CAREER PATH PREDICTIONS
Having so much data at hand allows teachers, with the help of machine learning algorithms, to predict students’ future academic development as well as the best career path for each of them. By analyzing previous performance, AI systems can predict a student’s future performance, allowing the Ministry of Education to estimate how many students are expected to join secondary school and university in a given interval.
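To make the idea concrete, here is a deliberately minimal sketch of performance prediction: fitting a least-squares trend line through a student’s past scores and extrapolating one term ahead. The function names and the example scores are invented for illustration; a real system would use far richer features and models.

```python
# Illustrative sketch: predict a student's next score from past scores
# with a simple least-squares trend line.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict_next_score(past_scores):
    """Extrapolate the fitted trend one term ahead."""
    xs = list(range(len(past_scores)))
    a, b = fit_line(xs, past_scores)
    return a * len(past_scores) + b

# A student improving steadily: 60, 65, 70 -> predicted 75.0 next term.
print(predict_next_score([60, 65, 70]))
```

The same shape of model, scaled up to whole cohorts, is what lets administrators forecast enrollment numbers from historical data.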
Furthermore, based on how students respond to certain subjects, some changes can be made to the annual academic curriculum. For this, the course content is analyzed using natural language processing (NLP) and learning gaps are identified and addressed. The system can also help discover the best delivery models for students.
But this is not limited to public schools. For instance, online course provider Coursera is already using artificial intelligence to identify gaps in the courses it provides. Other companies, such as Netex Learning, allow educators to design digital curriculum and content across devices, integrating rich media like video and audio, as well as self- or online-instructor assessment.
AUTOMATIC GRADING SYSTEM
Another key benefit for teachers is that artificial intelligence allows them to automate grading for nearly all kinds of multiple-choice and fill-in-the-blank tests. Moreover, experts are trying to come up with more accurate grading systems that could also assess and grade written answers and regular essays. Other similar administrative duties can also be automated, enabling teachers to spend more time focusing on students and addressing their learning needs instead of losing several hours grading papers.
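The multiple-choice and fill-in-the-blank case is simple enough to sketch in a few lines. The answer key and question ids below are made up; the point is only that exact-match grading is trivially automatable, which is why it was automated first.

```python
# Minimal sketch of automated grading for multiple-choice and
# fill-in-the-blank answers (answer key is invented for illustration).

ANSWER_KEY = {
    "q1": "b",              # multiple choice
    "q2": "photosynthesis"  # fill in the blank
}

def grade(submission, key=ANSWER_KEY):
    """Return the fraction of correct answers, ignoring case and whitespace."""
    correct = sum(
        1 for q, expected in key.items()
        if submission.get(q, "").strip().lower() == expected
    )
    return correct / len(key)

print(grade({"q1": "B", "q2": " Photosynthesis "}))  # -> 1.0
```

Essays resist this approach precisely because there is no single expected string to compare against, which is where the more ambitious grading research mentioned above comes in.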
The idea of having your fridge turn on the AC or the washing machine and then remind you to buy some groceries on the way home doesn’t seem so far-fetched. This is thanks to the Internet of Things, which promises to completely reshape our lives by translating our physical world into digital signals.
What is IoT and why does it need AI?
The Internet of Things, or IoT, can be defined as a network of physical objects with embedded sensors, actuators and other devices that collect and transmit information about those objects. According to a report from Gartner, by the end of 2020 the number of connected devices is expected to rise to over 25 billion. In other words, there will be more than three things connected to the internet for each person on the planet.
The use of smart devices goes beyond having your alarm clock or phone notify the coffee machine to start brewing coffee for you. For instance, IoT technology can be applied to transportation networks and to the “smart cities” that will hopefully help reduce waste and increase energy savings. Connected devices can also be used in healthcare for patient monitoring. A smart device can gather data about a person’s heart rate, respiratory rate, skin temperature and body posture. All these details can alert doctors to potential health problems before they become serious, or provide them with relevant information about which treatments could be most effective, even when their patients aren’t in the office.
If you’re wondering what role artificial intelligence plays in all this, remember that AI excels at identifying patterns and using them to make predictions. All the data collected by smart devices can be analyzed by machine learning models, which can then learn to adjust the behavior of IoT services.
Artificial Intelligence of Things
Artificial intelligence and the Internet of Things are a perfect mix. Together they have created a new concept called the artificial intelligence of things, or AIoT. The purpose of AIoT is to improve human-machine interactions and enhance data management and analytics. Adding artificial intelligence capabilities to the Internet of Things clearly enables better analysis and a better decision-making process.
Even though the artificial intelligence of things is a relatively new concept, the combination of AI and IoT opens up a whole new range of possibilities. At the same time, technology companies are increasingly open to the idea of leveraging AI to add value to IoT systems.
AI technology, more specifically machine learning, is able to automatically identify patterns and detect anomalies in the data that smart sensors and devices collect. Machine learning approaches can make operational predictions up to 20 times faster and with greater accuracy than traditional business intelligence tools. AI applications for IoT systems can also enable companies to avoid unplanned downtime, increase operating efficiency, spawn new products and services, and enhance risk management.
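The anomaly-detection idea described above can be illustrated with the simplest possible technique: flagging sensor readings that sit far from the mean of a window of data. The sensor values and threshold below are invented; production systems would use trained models rather than a plain z-score test.

```python
# A minimal sketch of anomaly detection on IoT sensor data:
# flag readings more than `threshold` standard deviations from the mean.

def detect_anomalies(readings, threshold=3.0):
    """Return indices of readings whose z-score exceeds the threshold."""
    n = len(readings)
    mean = sum(readings) / n
    variance = sum((r - mean) ** 2 for r in readings) / n
    std = variance ** 0.5
    if std == 0:
        return []  # perfectly flat signal: nothing to flag
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / std > threshold]

# Temperature sensor: steady around 21 degrees with one faulty spike.
temps = [21.0, 21.2, 20.9, 21.1, 35.0, 21.0, 20.8]
print(detect_anomalies(temps, threshold=2.0))  # -> [4]
```

Swapping the z-score for a learned model is what turns this from a static rule into the adaptive behavior the article describes.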
Artificial intelligence is not only changing the products and services we provide and the way we create them. It has become quite clear that it is also completely reshaping human behavior and leadership.
Whether we realise it or not, some of the high-level decisions taken by people in key positions, be they leaders of organizations or nations, are already influenced by the evolution of AI, machine learning, NLP and related technologies. In 2020, artificial intelligence is expected to create 2.3 million jobs and eliminate 1.8 million, according to Gartner. Hence, there is no doubt that we need to rethink the way we lead and evaluate people.
What is the extent of AI’s influence on leadership? How are we all going to cope with these technological and functional changes? Who will lead in the era of robots? These are only some of the questions researchers are struggling to answer. At this moment we have too little evidence to paint a clear picture, but we have some opinions we’d like to share with our readers.
Can a robot be a good leader?
The correct and, at least for us, obvious answer is no. We are still struggling with human leaders who are not good at the job even though, theoretically, they meet all the “prerequisites”. A leader needs to be empathetic, creative and really good at reading people, qualities that AI systems are said to be unable to acquire. And these are only some of the many skills a leader needs that no machine learning or NLP algorithm can reproduce. So a robot cannot be a leader at all, good or bad.
But it could be a good manager. Apart from a leader who can inspire and share a vision, employees also need someone to assist them, give them clear directions and be there when they have questions. When artificial intelligence is integrated into the management side of a business, it speeds up processes, saves time for leaders and can even eliminate the middle-management positions that sometimes, instead of making things easier, only add to the existing burden and bureaucracy.
Being able to process large amounts of data in a relatively short interval, a machine can multi-task accurately, without getting tired or losing quality to task fragmentation. Of course, this only applies to routine, predictable and repetitive tasks. But this way, employees can invest their time and knowledge in something that brings more business value and more personal satisfaction. Productivity is also bound to increase when people know that much of the mundane work they had to do on a daily basis has been automated.
WHAT ABOUT THE DECISION-MAKING PROCESS?
We’ve already established that a robot cannot be a leader, but that doesn’t mean it cannot contribute to the development of human leaders.
We know that leaders need to make many decisions, some of them during “war” time, when the company is in serious trouble. Not all decision-making processes run smoothly, and several psychological studies have indicated that when business leaders have to make multiple decisions across the day, there is an inevitable dip in energy that increases the likelihood of poor choices. With an AI system, such risks can be mitigated. If fed the right data and trained correctly, a machine learning algorithm can make a virtually unlimited number of consistent decisions without tiring.
Fully adopting AI technology in the management department is still up for debate. Most companies are currently just trying out different solutions that might help them with a small set of tasks. For example, in 2018, an independent workforce solutions provider launched a decision analytics platform that uses machine learning algorithms and predictive analytics to help people understand and interpret data scenarios better. Machine learning cannot replicate the pragmatic way humans think, nor draw logical correlations on its own, but it can help people make more informed, data-driven decisions.
There’s one industry that artificial intelligence can greatly enhance; however, it is also the one that can suffer the most from any misuse of the technology. Researchers have been weighing the pros and cons of using AI in healthcare for a while now, and the case is still controversial.
On the one hand, the introduction of robots and artificial intelligence into the medical system can speed up processes, free doctors from dull and repetitive duties so they can focus their entire attention on patients and, most importantly, help people stay healthy.
On the other hand, AI systems have access to a lot of sensitive data about people’s medical history. A group of researchers at Harvard and MIT warns that these systems may have unintended consequences. According to the paper they published in the journal Science, there is always the possibility of adversarial attacks: small manipulations of digital data that can change the behavior of AI systems. These can lead to patients being misdiagnosed, which might ultimately affect people’s lives and survival chances.
The good about AI in healthcare
The hype around artificial intelligence and the great benefits it brings in many domains have allowed for its adoption in the healthcare system as well. There is even a debate going on in the medical world about whether human physicians will someday be totally replaced by AI doctors. But that’s a different discussion, and there’s a long way to go until we get there. Nowadays, AI systems can assist physicians in making better clinical decisions and, in some cases, can even replace human judgement in certain functional areas of healthcare (e.g. radiology).
DATA MANAGEMENT AND DIAGNOSIS
One area where AI can help in healthcare is managing medical data. Today, information about patients is scattered across medical notes, electronic recordings from medical devices, physical examinations, and clinical laboratory tests and images. With the help of machine learning and natural language processing (NLP), relevant medical information can be unlocked faster and then used to make better clinical decisions.
For instance, Google DeepMind Health is used to mine medical records in order to offer better and faster health services. It is able to process huge amounts of medical information within minutes.
With regard to diagnosis, the most common use cases we’ve seen in healthcare so far are in neurology, cardiology and oncology. As an example, Manipal Hospitals in India have deployed IBM Watson for cancer care. Progress has also been made in diagnosing eye diseases: Google, in partnership with Moorfields Eye Hospital NHS Foundation Trust, is aiming to improve eye treatment using machine learning.
HANDLING REPETITIVE AND DULL TASKS
Being a doctor is great, but it also requires a lot of time and effort spent on things that bring little value to either the patient or the field of medicine. Entering patient data into a computer and filling in paperwork are important and need to be done, yet they keep the medical staff occupied and unable to attend to patients and their needs. Artificial intelligence, and robots in particular, can handle these tasks much faster and much better. This way, doctors can spend more time thinking about alternative treatments, identifying better options for their patients and even talking to them, understanding their needs and comforting them. Taking X-ray or CT scans and analyzing tests can also be handled by machines that do it faster and with better results. AI pattern recognition software can detect anomalies in scans, helping doctors make earlier diagnoses and improve patients’ survival chances.
According to IBM, its exploratory Medical Sieve project aims to build a next-generation “cognitive assistant” capable of analytics and reasoning over a vast range of clinical knowledge. Medical Sieve can help with clinical decision-making in cardiology and radiology.
AI chatbots are not a novelty; they are used in numerous industries, and the healthcare system is no exception. Using NLP and deep learning (DL) to understand patients’ requests and improve their responses with each interaction, AI receptionists can perform many tasks in a shorter amount of time and with greater accuracy than a human. They are programmed to assess the urgency and importance of a request and can transfer it to a human when the situation requires it.
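The triage step described above can be caricatured with a rule-based sketch: score a request’s urgency and escalate to a human above a threshold. The keywords, weights and threshold here are entirely invented; a real receptionist bot would use an NLP model trained on labeled requests rather than keyword matching.

```python
# Hypothetical sketch of AI-receptionist triage: score urgency from
# keywords and route to a human when the score passes a threshold.

URGENT_TERMS = {"chest pain": 3, "bleeding": 3, "fever": 2, "refill": 1}

def triage(message, escalate_at=3):
    """Return an urgency score and whether to escalate to a human."""
    text = message.lower()
    score = sum(weight for term, weight in URGENT_TERMS.items()
                if term in text)
    return {"urgency": score, "route_to_human": score >= escalate_at}

print(triage("I need a prescription refill"))        # low urgency, no escalation
print(triage("My father has chest pain and a fever"))  # escalated to a human
```

Replacing the keyword table with a learned classifier is precisely the jump from a scripted bot to the NLP-driven receptionists the article refers to.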
Artificial intelligence adoption across industries is skyrocketing. A McKinsey Global Survey revealed that 47% of the companies interviewed had embedded at least one AI capability in 2018, compared with only 20% the previous year. Furthermore, 30% said they were piloting AI, and 71% expected their overall investment in AI to grow in the coming year.
One domain in which we can witness an increase in the number of AI-driven technologies used in the business processes is retail. A report from CB Insights shows that between 2013 and 2018, retail artificial intelligence startups raised $1.8 billion across 374 deals.
Artificial intelligence helps the retail sector tackle a series of frustrating problems such as inaccurate inventory, overstretched and undertrained store associates, poor product placement and sub-optimal pricing. By processing huge amounts of information from different sources, AI systems can identify viable solutions faster and more accurately.
Supporting this idea is another study, released by IBM earlier in 2019, which identifies six ways the retail industry plans to use AI, based on respondents’ feedback:
· Supply chain planning (85%)
· Demand forecasting (85%)
· Customer intelligence (79%)
· Marketing, advertising, and campaign management (75%)
· Store operations (73%)
· Pricing and promotion (73%)
Even with this advance on the e-commerce front, brick-and-mortar stores are still here and are looking for innovative ways to stay at the forefront of retail. AI can therefore also play a crucial part in bridging the gap between virtual and physical sales channels.
From reducing shipping costs and improving supply chain efficiency to personalizing shopping experiences and helping workers acquire new skills, AI technologies allow retailers to compete in the 21st century economy and better serve their customers. This is what the future of retail looks like. — Mark Mathews, NRF vice president of research development and industry analysis
A recent report revealed that 40% of companies claiming to be in the AI business do not actually have any real artificial intelligence capabilities.
The same study found that when companies did include artificial intelligence and machine learning into their businesses, the use cases were quite ordinary. Some of the most popular ways start-ups used AI included chatbots (26%) and fraud detection (21%).
These findings show that artificial intelligence is currently the most used and, sadly, the most misused concept in technology. Companies are in a hurry to jump on the AI bandwagon without fully comprehending what the challenges are or whether it is even possible to add such features to their services.
Nevertheless, it is clear that artificial intelligence has started to generate good results and provide added value to businesses in different areas, including retail and fintech. The real question, in this case, is whether companies are actually ready to leverage AI.
The current technological context is creating excellent conditions for businesses to start fueling AI operations. Humanity’s access to better computing capabilities has grown exponentially, huge amounts of data have become available globally and computer scientists have learned how to improve their algorithms. All of these are making it possible for AI technology to power real-world applications.
But no great advancement comes without its shortcomings. There are a number of challenges businesses and individuals alike need to overcome in order to benefit from everything that artificial intelligence has to offer.
FEAR AND RESISTANCE TO CHANGE
Humanity has gone through considerable transformation throughout its history. Yet this has not made us less resistant to change. On the contrary, people seem to have become more fearful with each new technological development. If in the 19th century the Luddites believed that machines would steal their jobs, nowadays people envision an apocalyptic scenario in which robots take over the world and deem humans obsolete.
In this case, companies trying to add AI capabilities might encounter resistance not only from customers, who might not adopt their AI-based products or services, but also from their own employees. For instance, several Google employees resigned from their positions because they did not agree with the company’s AI plans.
Even famous figures such as Elon Musk, Mark Cuban and Stephen Hawking have argued that AI could pose an existential threat to humanity. In an attempt to reduce the anxiety around weaponized robots, leading minds of the tech world, including Elon Musk, have committed not to use artificial intelligence to develop weapons.
LACK OF KNOWLEDGE OR BUDGET
Artificial intelligence is not a new concept; however, it has only recently gained considerable ground. This means the skills and knowledge needed to implement AI processes and drive organizational change are only just starting to be in demand. A lot of companies, as well as individuals, are rushing to be part of the movement, but they are not well enough trained to deal with it. Moreover, those who are equipped to handle it are inevitably drawn to the world’s tech giants that invest in AI, such as Google, Facebook, Amazon and Microsoft, to name but a few. Hopefully, this will be solved once the education system starts preparing future generations for the AI era.
Limited funding could constitute another major challenge when it comes to AI adoption. Stakeholders might choose not to allocate the necessary capital if they are not enthusiastic enough about the initiative. Nevertheless, as artificial intelligence continues to make its mark in major domains, more and more investors are willing to fund companies that promise to revolutionize the way they do business.
Soon after software development emerged, it became obvious that any application beyond a homework assignment needs testing. Testing evolved naturally along with frameworks, on both the client side and the server side, and now we talk about test-driven development and behavior-driven development (e.g. Cucumber). Of course, the most efficient tests are automated ones.
A couple of years ago, we had the opportunity to start a project from scratch (MDW Automatic Testing, along with Claudiu) and I said to myself: why not develop a small framework to easily add automated tests?
So, this article will focus on automated testing using this tech stack, but most of the concepts apply to other stacks as well.
II. Unit testing
Simply put, unit testing refers to testing the functions in your code.
Unit tests form the base of the automated test pyramid and, from my personal experience, they are often skipped because they require a lot of coding and thus more time (also check this nice article about the levels of testing). We made a compromise and chose to cover critical functionality with unit tests and use integration tests for the rest.
In order for the code to be unit-testable, we used the following tools and design patterns:
Dependency injection — we chose Ninject. Of course, there are alternatives, but this one seemed the most versatile. It makes it easy to replace a service dependency when needed (e.g. with a mock object).
A mocking framework — we chose NSubstitute. It makes it easy to replace classes or functions in unit tests.
A unit testing framework — we use NUnit. It integrates nicely with Visual Studio and we use it to run all our tests (including the Selenium WebDriver ones).
Generic repositories — a design pattern that involves creating a generic class to handle basic data operations (add, insert, bulk insert, update, bulk update, etc.). Unfortunately, Entity Framework 6 is not very friendly when it comes to dependency injection and mocking, so generic repositories come in handy since they can simply be replaced with generic in-memory repositories.
All generic repositories implement a single interface (see below). Besides the regular (non-cached) repository and the in-memory one, there is also a “cached” one that is plugged in for some entities to avoid database fetches (typically entities mapped to rarely changed and relatively small tables).
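The project itself is C#/.NET (Ninject, NSubstitute, EF6), so as a language-agnostic illustration of the pattern, here is the generic-repository idea sketched in Python: one interface, and an in-memory implementation that unit tests can inject in place of a database-backed one. Class and method names are invented for this sketch, not taken from the actual project.

```python
# Sketch of the generic repository pattern: a single interface that both
# the "real" (database-backed) and the in-memory test repository implement.

from abc import ABC, abstractmethod

class Repository(ABC):
    """The single interface all generic repositories implement."""

    @abstractmethod
    def add(self, entity): ...

    @abstractmethod
    def get(self, entity_id): ...

    @abstractmethod
    def update(self, entity): ...

class InMemoryRepository(Repository):
    """Test double: same contract, backed by a plain dict."""

    def __init__(self):
        self._store = {}

    def add(self, entity):
        self._store[entity["id"]] = entity

    def get(self, entity_id):
        return self._store.get(entity_id)

    def update(self, entity):
        self._store[entity["id"]] = entity

# A service written against Repository can be unit-tested by injecting
# the in-memory variant instead of a database-backed one.
repo = InMemoryRepository()
repo.add({"id": 1, "name": "user"})
print(repo.get(1)["name"])  # -> user
```

The design choice is the same one the article makes: because services depend only on the interface, the DI container (Ninject in the original project) can bind the in-memory or cached variant per entity without touching the service code.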
For some time now, artificial intelligence has been revolutionizing the way we see banking, fintech, retail, education and even the medical system. At this point, it is clear to everyone that whatever future we might have, AI will definitely be part of it. What we don’t seem to agree on is how good such technology will be for humanity. On the one hand, a lot of experts believe that AI advancements will bring much value and help humans improve their existence over the coming decades. On the other hand, there are those who worry that, in time, we might start to change our idea of what it means to be human.
In the summer of 2018, a group of technology pioneers, innovators, developers, business leaders and policy makers debated this exact topic and reached a unanimous conclusion. They predicted that artificial intelligence will augment human effectiveness but also threaten human autonomy and capabilities. They also discussed the possibility of systems becoming so smart that they could outperform humans even at what is currently impossible for AI: complex decision-making, reasoning and learning.
In a recent interview, Bill Gates also declared that “artificial intelligence is both promising and dangerous, like nuclear weapons and nuclear energy”. However, Microsoft’s co-founder believes that medicine and education can and should be the main areas to benefit from what AI brings to the table.
The elephant in the room
The question of whether machines will replace humans has haunted our dreams ever since AI became almost ubiquitous in our lives. Nowadays, there is almost no industry that has not deployed artificial intelligence technology. Banks use it to detect fraudulent transactions and make predictions, retailers save time and money by adding AI capabilities for inventory and shelving, and there are even hospitals that adopt it to help doctors identify certain diseases. Furthermore, data shows that more than a third of US hospitals have at least one robot that can perform surgery.
With the possibility of machines someday really thinking and acting like us (or even better than us), everything seems to point towards humans becoming obsolete. How can we not worry?
FIRST ‘TAKEOVER’ WE NEED TO TACKLE – AI AND UNEMPLOYMENT
Job security has always been one of people’s main worries. And with AI disrupting so many domains, our replacement has started to look more like a fact than a possibility. Apart from repetitive and predictable tasks, there are also a number of white-collar jobs that might be done (better) by an AI system in the future. Lawyers, doctors, writers and journalists are also at risk of being replaced by machines.
But such technological development will bring many other changes and innovations that we will not be able to address all at once, which means new skills and capabilities will be required from humans. This will lead to new jobs being created, adding different opportunities for individuals. The hypothesis that AI will create more jobs than it replaces is also supported by a Gartner study, which predicts that in 2020, AI will create 2.3 million new jobs while eliminating 1.8 million traditional ones.
DevOps is an IT concept that has been heavily debated, marketed and talked about in the industry for quite a few years. What I love most about it is that it continuously improves; it is like a living organism, as is any IT company built on people’s creativity, passion and knowledge.
As I see it, if DevOps were a person, automation would be its heart, communication and knowledge sharing would be its blood, agile product management would be its brain, while servant leadership (or, even further, transformational leadership) would be the air it breathes.
In this article I am going to look at the DevOps heart and how we can keep it pumping.
In a DevOps environment, practices such as continuous integration, continuous delivery, continuous deployment, continuous monitoring, continuous testing, self-healing and auto-scaling are a must; and all of these can only be achieved by automating workflows, operations and any repetitive task that involves human effort.
In order to cover this automation need, several job titles have appeared on the market: Build Engineer, Release Engineer, DevOps Engineer, Site Reliability Engineer, [Cloud] Platform Engineer and various flavors of these. Of course, passionate debates and quite well documented papers have appeared on what exactly each of them means, how they overlap, how they complement one another and what kind of organizational structure they fit into (if curious, please see the references of this article).
The understanding and usage of these job titles also depend on geographical location, in direct correlation with how many companies and teams have adopted DevOps (culture, methodology, processes and tools). For reference, the 2017 State of DevOps Report by Puppet & DORA states that 54% of software teams in North America have already adopted DevOps, compared to 27% in Europe and Russia and 10% in Asia, so I expect some differences in maturity level, and thus in job titles (and, implicitly, in roles and responsibilities).
From my observations of the Romanian job market, I have built the following picture of how these job titles are distributed:
As a storyline, I would put it like this:
· two to three years ago, when I first analyzed the market, there were many job requests for Build Engineers, meaning someone with technical expertise in automating the build process who would be able to implement continuous code integration as a first step towards building a Continuous Delivery Pipeline (CDP). The technical skills most often required for this role: source control management and tools (e.g. Git, SVN), build tools for packaging the source code (e.g. Ant, Maven, Makefiles), CDP-related tools (e.g. Jenkins, Groovy, TeamCity, Artifactory, Nexus), and knowledge of the CDP workflow and its technical components;
· then, for quite a short period of time, I observed an increase in requests for so-called Release Engineers, from whom companies demanded the same knowledge as from Build Engineers and, in addition, strong knowledge of managing environments/platforms, configuration management and deployment automation, and the configuration/administration/management of agile-specific tooling. They were expected to build a complete, reliable Continuous Delivery Pipeline, connecting all the technical pieces together (e.g. integrating Selenium for test automation, Docker or cloud providers’ SDKs) and implementing best practices in the workflow. The term is not used that much anymore, at least on the Romanian market. Searching through LinkedIn Jobs, I can see that it is not heavily used worldwide either, in comparison with Build Engineer (a few times more job requests) or DevOps Engineer (requested roughly 10 to 15 times more often). Maybe the word “release” was not that inspiring, and everyone in the industry was thinking of the old release policies with long feedback loops, which is why it was more or less dropped. On the other hand, Google mentions the Release Engineer role in the book “Site Reliability Engineering: How Google Runs Production Systems” as defining “best practices for using [their] tools in order to make sure projects are released using consistent and repeatable methodologies. Examples include compiler flags, formats for build identification tags, and required steps during a build.”