Yes, programming can partially increase your IQ, since it directly exercises logical-mathematical and creative intelligence, both of which are evaluated in IQ tests.
However, it is important to bear in mind that developing software will benefit our cognitive abilities only as long as it happens in an environment of good practices and continuous learning.
Once we have some experience developing software projects, we have already built a mental model of what an ideal software project could be. But which path should we take to develop a software project correctly?
It may not seem very complicated, because today we can find abundant information on the internet about project development methodologies such as Scrum or Extreme Programming. Both provide guidelines to follow during a project's development cycle to guarantee organization, observation, time estimation, and teamwork.
From our point of view:
Getting organized (to be highly productive) can be much more difficult than learning to program (for those just starting out) or mastering a new technology, since the really difficult thing is to develop projects in a structured, precise, and above all readable way.
Albert Einstein once said:
If you can't explain it simply, you don't understand it well enough. (Albert Einstein)
This applies to any area of learning. In software development specifically, if we are not able to simplify our code and organize it so that a third party can understand our project, we are not doing it well.
In this sense, through practice we must shape our way of thinking and doing things. We should also take into account the guidelines provided by different frameworks, methodologies, and expert recommendations, applying them to our projects. However, it is neither advisable nor efficient to follow such guidelines to the letter; we must be flexible, not rigid.
Next, I will describe 2 good software development practices that you should take into account when carrying out your projects, and that I am sure will end up changing your way of thinking.
Make your code as simple as possible!
It is not news to say that one of the biggest problems in computing is complexity. Therefore, simplicity is perhaps the most important and valued quality in the world of software.
Over time, computers have become indispensable in our lives and have brought about a very important change in society. In short, computers are useful because they allow us to do more with less, that is, to accomplish many tasks using fewer human resources.
Let's imagine a person wanted to perform, by hand, all the operations a computer does in a year. It would probably take the rest of their life. The real value of a computer lies in its speed and accuracy. And that's great!
However, the picture is not so perfect, because computers have a major flaw: they fail constantly. Perhaps we have not yet noticed just how often they suffer software defects. If anything else we used as frequently were as faulty as a computer, I'm sure we would have gotten rid of it by now.
Most, if not all, of the people I know experience at least one failure per week, if not more. At least once a week I suffer some kind of failure, or I find out that a friend or coworker has gone through the same thing, and this has been the case for about 10 years.
If we do the math, that is around 480 failures in my experience alone. And that's not cool.
When it comes to software, there is only one reason: bad programmers.
About 5 years ago I had a suspicion that the reason was bad programmers; however, I was not very sure. Now, with a few more years of experience in the IT field and having consulted many experts through their publications, I no longer have doubts.
I can fully say that bad programmers are to blame for countless computer failures.
It seems a bit unfair to blame software programmers, even more so when the vast majority of people I know who are dedicated to high-level software development are professionals who have quite developed logical thinking.
If the vast majority of programmers are quite logical people, why is there software with so many bugs? The main reason for computer errors is the COMPLEXITY.
A computer is probably the most complex machine I know of: every second that elapses, millions of tasks can be executed, and thousands of parts must work in sync. The operating system alone consists of tens of millions of lines of code. Windows 10 by itself has more than 4 million files and more than half a million folders. The following screenshot is proof of this:
To give you an even more complete idea, below I detail the number of lines of code per operating system:
| Operating system | Lines of code |
| --- | --- |
| Linux 3.1 kernel | 15 million |
| Windows XP | 40 million |
| Windows 7 | 40 million |
| Windows Vista | 50 million |
| Debian 5.0 (base code) | 67 million |
| Mac OS X «Lion» | 85 million |
Here is an additional piece of information:
Facebook has approximately 61 million lines of code, and Google has around 2 billion. Of course, the large number of services Google offers justifies it.
The software on which a computer works is so complex that probably no person will be able to understand the code in its entirety.
Therefore, programming must take place in an environment that seeks to reduce complexity and achieve simplicity. This way, we ensure that any programmer without extraordinary talent can keep working on an application. Otherwise, the code could reach such a level of complexity that working on it would be almost impossible.
In short, that's what programming is all about: "Reduce complexity to simplicity."
A good programmer creates things that are easy to understand, easy to maintain, and easy to debug. But don't confuse simplicity with fewer lines of code or with avoiding modern technologies. Sometimes simplifying your code increases your line count; just make sure to always document it.
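To make this concrete, here is a small invented Python example (the function names and grading rule are hypothetical): both versions do the same job, but the longer one is the simpler one, because every step is obvious to a reader.

```python
# A "clever" one-liner: compact, but hard to read and hard to debug.
def grade_clever(scores):
    return {n: ("pass" if s >= 60 else "fail")
            for n, s in scores.items() if s is not None}

# The simple version: more lines, yet each step is obvious.
def grade_simple(scores):
    """Map each student to 'pass'/'fail', skipping missing scores."""
    results = {}
    for name, score in scores.items():
        if score is None:
            continue  # no score recorded for this student
        if score >= 60:
            results[name] = "pass"
        else:
            results[name] = "fail"
    return results
```

A reader maintaining `grade_simple` can see at a glance where to add a new rule; the one-liner would have to be rewritten.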
In general, more advanced or modern technologies naturally tend towards simplicity. You just have to learn how to use them correctly, which is often a challenge.
In general, we think that programming in a simplified way will take longer than doing it quickly. For example, when we have to complete tasks at work, we usually try to do them fast, without stopping to think and plan. We couldn't be more wrong!
It is more efficient to spend more time thinking about the problem, seeking maximum understanding so we can propose a simplified solution, than to start writing a solution quickly only to realize later that the implementation has become unnecessarily complex.
You just have to look around you and realize the great problem that COMPLEXITY has become in software programs.
Many applications have stagnated because adding new functionality to the horrible, huge, complex monster of code they have become is nearly impossible.
If you want to know more about the fundamentals and ways of simplified programming, I recommend reading the following book. I loved it!
Always run tests. They are not optional!
In reality, they should never have been optional. Even so, many programmers still develop applications without any type of software testing, leaving code errors to be reported by the end customer. This is usually the case for the average freelancer.
Programming without testing is like driving without a seat belt or doing trapeze stunts without a safety net. Even today, the good practice of always testing software is far from universally adopted.
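As a minimal illustration of what "wearing the seat belt" looks like in practice, here is a sketch using Python's standard unittest module (the `average` function is invented for this example):

```python
import unittest

def average(values):
    """Arithmetic mean; an empty list is a clear error, not a crash."""
    if not values:
        raise ValueError("cannot average an empty list")
    return sum(values) / len(values)

class TestAverage(unittest.TestCase):
    def test_typical_values(self):
        self.assertEqual(average([2, 4, 6]), 4)

    def test_empty_input_is_rejected(self):
        # The failure mode is explicit, instead of a ZeroDivisionError
        # surfacing in front of the end customer.
        with self.assertRaises(ValueError):
            average([])
```

Run it with `python -m unittest`; every bug found later should become another test method so it can never return unnoticed.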
Let's review some incidents, caused by the absence or incorrect use of software testing, that produced millions of dollars in economic losses and in some cases cost dozens of lives.
I am sure the events below will make you reflect and give much more importance to software testing.
The first happened in 1983, when the Soviet Union's missile detection system reported that the United States had launched 5 missiles and that they were on their way.
Fortunately, the officer in charge, relying on intuition and judgment, did not order an immediate counterattack: the alert seemed strange because it was out of context, and five missiles are far fewer than would commonly be used in a surprise attack.
Hours later, it was confirmed that everything had been caused by an error in the missile radar system, an error difficult to detect at the time, since the system confused the sun's reflection on the clouds in a certain position with missiles. It very nearly started a Third World War. It might have been avoided with thorough work on:
Both are practices that are part of software testing.
Occurring in 1962 and causing the loss of approximately $18.5 million, Mariner I was the first mission in the Mariner program to attempt a flyby of Venus, sadly without success.
293 seconds after liftoff, a software bug diverted the rocket's trajectory. Seconds later, a self-destruct command had to be sent to prevent the falling rocket from causing further damage.
The error was later identified: a formula had been transcribed incorrectly into the code.
You will ask yourself, what is a radiotherapy linear accelerator? Linear accelerators are machines that emit X-ray beams aimed at a tumor from different angles. The great thing is that these devices are able to customize the X-rays to fit the shape of the tumor without affecting the surrounding area.
Between June 1985 and January 1987, the Therac-25, produced by AECL (Atomic Energy of Canada Limited), was involved in at least 6 accidents and 3 deaths caused by radiation overdoses.
After the investigations, it was concluded that the main causes of Therac-25 accidents were as follows:
And as if that weren't enough, the software the Therac-25 ran on was developed in a way that made it almost impossible to identify and fix bugs automatically.
Other causes found:
Believe it or not, the USS Yorktown, a warship that won countless awards for its excellence in combat and especially for its technological equipment, ended up being towed to port because of a software error.
In September 1997, a crew member entered a zero in a database field, causing the system to execute a division by zero internally. This triggered a cascade of errors, eventually producing a buffer overflow and, finally, a failure in the ship's propulsion system.
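The lesson generalizes: operator input must be validated at the boundary so a stray zero can never reach a division deep inside the system. A hypothetical Python sketch (the function and its names are invented for illustration, not the Yorktown's actual code):

```python
def fuel_per_tank(total_fuel, tank_count):
    """Divide fuel evenly across tanks, rejecting impossible tank counts.

    Validating here keeps a bad database entry from propagating into
    a division by zero somewhere deep in the propulsion software.
    """
    if tank_count <= 0:
        raise ValueError("tank_count must be a positive integer")
    return total_fuel / tank_count
```

The check costs two lines; the missing check cost a warship its propulsion.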
In 1993, Intel released a new processor that produced miscalculations. They were very difficult to notice, because to see the error you had to execute operations requiring a fairly exact result.
Despite this, Intel suffered a loss of approximately $350 million, not counting the damage to its image, which can hardly be quantified.
The 5 cases I just mentioned are only a few of the countless computer errors that have unfortunately cost human lives, and they are clear evidence of the need to write correct software.
Software engineers, like structural or civil engineers, should be able to demonstrate, by some method, the reliability of their work and its fulfillment of the required functionality.
As you may have noticed, testing is a fundamental part of any project. They are not optional!
According to Ilene Burnstein in her book: “Practical Software Testing”, software testing has 3 main processes:
If you don't take the time to build your test cases, you are not doing it right. It is important to build test cases for various scenarios, simulating as many situations as possible. Unfortunately for us, it is impossible to simulate 100% of the scenarios.
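One practical way to cover many scenarios is a table-driven test, where each scenario is one row. A small Python sketch (the leap-year function is just an invented example):

```python
def is_leap_year(year):
    """Gregorian calendar rule for leap years."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Each test case pairs an input with its expected result and covers
# a different branch of the rule; adding a scenario is adding a row.
TEST_CASES = [
    (2000, True),   # divisible by 400
    (1900, False),  # divisible by 100 but not by 400
    (2024, True),   # divisible by 4 only
    (2023, False),  # ordinary year
]

for year, expected in TEST_CASES:
    assert is_leap_year(year) == expected, f"failed for {year}"
```

Keeping scenarios in a table makes it obvious which branches are covered and which are still missing.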
Edsger Dijkstra said it well:
Testing can show the presence of errors in a program, but never their absence. (Edsger Dijkstra, Turing Award 1972)
But just as there are many cases of computer errors caused by incorrect test execution, there are also success stories that are worthy of a prize.
Today it is possible to develop software as reliable as any other product, even more so with the increase in automation capacity.
Line 14 of the Paris metro is fully automated. The trains are driverless and run by software. This train line was put into operation in 1998.
While its perfection cannot be guaranteed, more than two decades have passed without faults being detected, thanks to exhaustive testing work that ended up verifying around 86 thousand instructions.
In general, rigorous testing processes capable of guaranteeing flawless software are required, in some countries, only for systems whose failure can cause human losses.
The vast majority of software companies refuse to implement such rigorous testing processes because of the high cost involved, and because it is difficult to find testing professionals with sufficient training and experience, since implementing software testing processes can cost as much as, or more than, developing the software itself.
Of course, "you don't learn until it happens to you," as in the case of Intel, which had to lose around $350 million over a calculation error in its processor. That mistake led it to become one of the IT companies with the largest budgets for software testing research.
The path to developing a software project consists of 3 well-defined stages:
In the validation stage, the engineers in charge make sure the software complies with what is laid out in the specifications, and the way they do it is by testing.
Once the software has been validated, it is assumed to meet all the specifications, and the validation cycle is repeated one last time to confirm it.
What is described above has a big problem of inconsistency, I'll explain it to you below:
The most important problem with the current validation method, testing, is that it does not ensure that the software complies with the specifications. This is because specifications are written in natural language, with terms that always lend themselves to individual interpretation, generating ambiguities that will surely surface at the end of the project.
A second problem is that you can never test every possible case. Suppose we have a small program whose input data is unbounded: it would be impossible to test every case, since the inputs are infinite. That is why software tests are run only on a sample of selected cases, sometimes very small samples, a shortcut usually justified by financial and time constraints. In conclusion, we cannot say that a piece of software is correct just because it has completed the testing phase; there is insufficient evidence for such a claim.
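Since exhaustive testing is impossible, in practice we check a property over a random sample of the infinite input space. A minimal Python sketch of the idea (dedicated property-based testing tools do this far more thoroughly; the function under test is invented):

```python
import random

def absolute_value(x):
    """The function under test: |x|."""
    return x if x >= 0 else -x

# We cannot try every real number, so we sample 1000 of them and
# check properties that must hold for any input whatsoever.
random.seed(42)  # reproducible sample
for _ in range(1000):
    x = random.uniform(-1e9, 1e9)
    result = absolute_value(x)
    assert result >= 0        # the result is never negative
    assert result in (x, -x)  # it is x itself, up to sign
```

A thousand passing samples raise confidence but, as Dijkstra warned, they still prove nothing about the inputs we did not try.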
If we try to be logical (as we should be, if we dedicate ourselves to computing), then by affirming that a piece of software is correct after completing the testing phase without finding any errors, we are committing what is called the appeal to ignorance fallacy.
In logic, an argumentum ad ignorantiam, or appeal to ignorance, is a fallacy that consists of asserting the truth (or falsity) of a proposition by claiming there is no proof to the contrary, or by pointing to an opponent's inability or refusal to present convincing evidence to the contrary. This impatience with ambiguity is often criticized with the phrase "absence of proof is not proof of absence": the fallacy is committed when the truth or falsehood of a proposition is inferred from our ignorance about it. (Definition of the appeal to ignorance fallacy)
As you may have noticed, we make many mistakes when applying software tests in a conventional way and many times we are not aware of the costs that this generates.
Just to give you an idea:
As we have already seen, one of the main problems is the ambiguity of specifications which, when expressed in natural language, lack precision and logical-mathematical rigor. One solution is to use a formal language, where there is no room for ambiguity.
By using a formal method to develop our software projects, the software's properties and functionality are guaranteed through deduction, in other words, through mathematics (p -> q).
This formal approach requires much more time and budget, since the specifications must be written far more precisely and purged of ambiguities, so that the resulting code can be proven reliable against them.
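To give a flavor of what deduction instead of sampling looks like, here is a toy sketch in the Lean proof assistant (the `double` function and its theorem are invented for this illustration): rather than testing `double` on a handful of inputs, we prove once, for every natural number, that its output is even.

```lean
-- A tiny program...
def double (n : Nat) : Nat := 2 * n

-- ...and a machine-checked proof that its output is always even:
-- the witness k = n satisfies double n = 2 * k by definition.
theorem double_is_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, rfl⟩
```

The proof covers all infinitely many inputs at once, which is exactly what no finite test suite can do.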
Reaching this level of rigor to create quality software may seem far removed from everyday reality. However, some companies do base their systems on formal development, usually companies dedicated to critical areas where a small error can instantly cost human lives.
On a small scale, however, such rigor is not profitable; at a minimum, we must carry out conventional testing.
Below I share a series of publications of good practices that you should not miss if you are looking to develop quality projects.
Learning is like fuel for our brain.
In school and at university we sacrificed ourselves studying continuously, and none of it was enough, because new information appears every minute, regardless of the field you studied.
There will always be something new to learn.
Something to keep in mind is that when we learn, our brain undergoes changes in its neural structures.
Modern research has shown that the brain has the ability to permanently change and deform (plasticity), and not only in children but also in adults.
These changes in the brain can be caused by good practices of continuous learning, restructuring synaptic connections and sometimes creating new ones.
It was once believed that the bigger or heavier a person's brain, the more intelligent they were. However, recent studies have determined that people with a higher IQ have a less dense but much more organized neural network.
For this research, the IQ has been calculated on the following factors:
Here is a little more information about the research in question:
The team led by Erhan Genç analyzed the brains of 259 healthy men and women between the ages of 18 and 40 in order to measure dendrites in the cerebral cortex, that is, the extensions nerve cells use to communicate with each other during cognition.
Prior to the study, all participants took an IQ test. After studying the dendrites, it was determined that the higher the IQ, the fewer dendrites there are in the cerebral cortex.
In other words, it was concluded that smarter people not only have more neurons but also fewer dendritic connections between them during cognition, which means they have a less dense neural network.
The studies were validated with a sample of 500 people and the same conclusions were reached.
Erhan Genç, lead author of the study concluded:
Intelligent brains are characterized by a thin but highly efficient neural network. This makes it possible to achieve a high level of thinking with minimal neural activity. (Erhan Genç)
As I have already mentioned in previous paragraphs, programming affects the way of thinking of those who practice it, in that sense it directly influences our mental abilities.
But in what way does it do it? Let's see.
A programmer thinks very differently from others, because in general they tend to be more logical and more rational than the average, although not necessarily.
From the moment we decide to learn to program, we must choose which language to start with. That choice is not always free, though: in general, most of us who dedicate ourselves to software development picked our first language without any experience, or were practically forced to start with a language imposed by a teacher at school or university.
However, such limitations are less and less frequent, thanks to the amount of information available on the internet and the strong promotion of self-taught learning.
The paradigms of programming languages have already shaped many minds, in some cases imposing more limitations than in others, depending on the starting language. By this I do not mean that your first language determines your success or failure, but the paradigms with which you start in the world of programming do imprint patterns on your thinking.
Learning to program in COBOL, FORTRAN, or Pascal does not mean you are doomed to failure. However, their incompatibility with modern technologies and their lack of libraries and functions will limit your learning and growth.
Nor do I mean to imply that programming languages over 50 years old are bad.
Many systems designed for the operations and transactions of banks, pension fund managers and insurers continue to use COBOL. And it looks like they will continue to use it for many years to come.
I mention some facts that, as incredible as they seem, are all true.
75% of business data is processed in COBOL (Source: Gartner).
There are 180 billion to 200 billion COBOL lines in use worldwide (Gartner).
15% of new applications are written in COBOL (Gartner).
And how expensive would it then be to migrate from COBOL to modern technology systems?
Replacement costs for COBOL systems, estimated at $25 per line, run into the hundreds of billions of dollars. (Tactical Strategy Group)
Bill Curtis said it well:
Banks should stick with their old COBOL applications, since these do not have the security and development problems that appear with newer languages such as Java. (Bill Curtis, COO of CAST)
Below, I'll mention 3 ways programming affects your brain:
The programming language we start with is nothing more than a tool, but it comes with paradigms and idioms that directly influence your way of thinking. Not for nothing did Edsger Dijkstra, one of the pioneers of distributed programming, say:
The tools we use have a profound (and devious) influence on our thinking habits and, therefore, on our thinking abilities. (Edsger Dijkstra)
Now that you know how much our first programming language matters, along with the whole set of tools we use when programming, my advice is that the first thing to consider when choosing your first language is your comfort.
If you are just starting out, don't be swayed by money. It is true that some languages pay better than others, but money should not be your goal. If it were, I could advise you to start with COBOL, Pascal, or FORTRAN: languages with very little documentation and very few practitioners today, which is why they are very well paid where they are still required.
In reality, dedicating yourself to software development not only brings benefits to your thinking habits and cognitive skills, it can also ensure a more than stable economic future, as it is a very well paid sector that is currently growing.
Today is the best time to start. Let's see why:
According to the Economic Commission for Latin America and the Caribbean (ECLAC), Latin American countries will begin growth after the economic recession of 2020.
A growth of 3.7% is estimated for 2021, where the main protagonists will be those who are dedicated to the digital world.
As we have already mentioned, learning has positive effects on the brain. In this sense, programming counts as a mental exercise that directly favors the brain.
Let's review some background that confirms the benefits of programming to brain health:
In 1991, a study examined the effects of computer programming on cognitive outcomes and determined that students in programming-related areas score 16 percentile points higher than average on IQ tests.
Another larger study in 1999 ended up confirming that intellectually engaging activities serve to buffer individuals against cognitive decline.
Later in 2009, a study found that people who engage in brain-stimulating activities in later years can lower their risk and even delay the onset of Alzheimer's and other types of dementia.
In 2014, a study titled “Understanding Understanding Source Code with Functional Magnetic Resonance Imaging” used fMRI scans to observe brain activity as programmers tried to read and understand snippets of code.
It was concluded that 5 areas of the brain were involved:
Bear in mind that the participants only had to review a 20-line code snippet, which is not a great challenge; that is why no activity was detected in brain areas related to mathematical calculation.
What was noticeable was the strong involvement of parts of the brain normally associated with language processing, memory, and attention.
Programming is the closest thing to having superpowers. (Drew Houston, CEO of Dropbox)