How to get started in IT?

Lately, I’ve been seeing the question “how do I get started in IT?” more and more often. I hear it from my friends and their friends, and I see it popping up on various social networks. Since more and more people are taking an interest in this field, I decided to offer some hints on the subject in this episode. I would like to warn you in advance, though, that the answer is not simple: retraining is a process that consists of several steps and takes time. There is also no single foolproof way to transition into IT; I think anyone who has already walked this path could tell their own story. I decided to summarize my own thoughts on the subject in this article, hoping it will be useful to some of you.

The IT industry employs a great many specialized engineers: programmers, administrators, DevOps engineers, database architects, testers and many others. However, there are also numerous non-technical employees: recruiters, managers, marketers, salespeople and so on. The latter group sometimes plays a very important role, for example explaining to a non-technical customer how a product works, or finding the right technical specialist to solve the customer’s problems. I dare say that without these non-technical employees, contracting and consulting companies would not be doing nearly as well as they are today. The two groups of specialists, technical and non-technical, complement each other perfectly and together drive the creation of extraordinary products.

So before you start looking for a job in IT, ask yourself whether you really want to do technical work. It is possible to work on interesting tasks in IT with only a general orientation in the technologies, without deep knowledge of them. If you like talking to new people every day and, as they say, “know people,” you might want to become a recruiter. If you enjoy meeting clients, listening to their needs and relaying them to developers, you may want to become a project manager. If you like organizing the work of others, are orderly and are a good listener, you might want to become a Scrum Master who makes entire companies run more efficiently. Of course, each of these roles requires a different level of technical knowledge, but a general orientation in the subject is enough. So the first step toward finding a job in this industry is to ask yourself, “do I really want to deal with technical issues?”, because this industry also needs people with other specialties. Only after thinking this over honestly can you move on.

Suppose, however, that after some thought you decide to do technical work: programming, administration or some other engineering field. As you probably know, technologies develop very fast, so whatever path you choose, you will have to keep educating yourself and learning new things throughout your career. You will also have to catch up with your peers who are already working in the industry, so the first months in your new job may be intense. Especially if you get a job at a startup, where the pace of work is faster than in large companies. In large corporations, on the other hand, there is usually much less pressure, but the pace of personal development is also slower. I don’t have a clear answer as to which option is better for a first job. Usually at a startup you can very quickly acquire skills that, for various reasons, would take you 2-3 times longer to learn in a corporation, and consequently your potential earnings also grow much faster. However, you need to prepare yourself for the fact that sometimes you will have to read or learn something after you get home. Personally, I recommend starting a career at a startup, but everyone has to make this decision for themselves.

Whether you’ll be working at a startup or a large multinational corporation, and no matter what growth path you choose, there is a certain set of skills that every technical employee needs. Whether you will be involved in testing, game programming or anything else, you need to know a few basic tools.

The first tool is git. You probably know that applications and scripts are written as properly prepared text files, which we call source code. But have you ever wondered how this code is developed? Several people on a team work on the same source code: one adds some functionality, another fixes a bug, and a third tests a completely new optimization idea. Each of these people sometimes changes the same text files, so it would seem that after just two days each of them would have differently working code and no one would have the result of all their combined work. This is exactly the problem solved by the git version control system, created by Linux creator Linus Torvalds himself. Git is a tool that, among other things, collects code changes made by many programmers and combines them into one. This allows code to be developed by people in any part of the world who never even have to meet in person. Today, git is the foundation for a great many even more modern technologies, which are beyond the scope of this episode. And since it is the basis for other technologies, it is impossible to avoid, so it’s worth starting to learn it even before you start looking for a job in IT.
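To make the combining of changes concrete, here is a toy sketch in Python, purely for illustration (git itself is written in C and works very differently under the hood): the core trick of a merge is to compare two people’s versions of a file against their common ancestor and keep whichever side changed each line.

```python
# A toy line-based three-way merge: given the common ancestor of a file and
# two independently edited versions, keep each side's changes. Real git is
# far more sophisticated (it diffs, handles added/removed lines, etc.);
# this sketch assumes all three versions have the same number of lines.

def three_way_merge(base, ours, theirs):
    merged = []
    for b, o, t in zip(base, ours, theirs):
        if o != b and t != b and o != t:
            # both people changed the same line differently -> a merge conflict
            raise ValueError(f"conflict: {o!r} vs {t!r}")
        merged.append(o if o != b else t)  # keep whichever side changed the line
    return merged

base  = ["def greet():", "    print('hi')",    "    return None"]
alice = ["def greet():", "    print('hello')", "    return None"]  # Alice reworded the message
bob   = ["def greet():", "    print('hi')",    "    return True"]  # Bob changed the return value

print(three_way_merge(base, alice, bob))
# ['def greet():', "    print('hello')", '    return True'] -- both changes survive
```

When both people edit the very same line, no algorithm can guess who is right; git stops and reports a conflict for a human to resolve, which the `ValueError` above stands in for.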

Another thing we need to take seriously is a task tracking system. Every IT company uses some such solution, and the work of virtually every technical employee in the industry follows the same pattern: look at the list of tasks to complete this week, take the one with the highest priority, and when it is done, write a short comment and close it. From the employee’s point of view, this amounts to logging into the appropriate page, moving the rectangle with the task’s name from the “to-do” column to the “in progress” column, and then moving it to the column titled “done.” A seemingly simple action, and at first it seems pointless. I assure you, however, that it is not pointless at all. The person who prepared these tasks for us, who is sometimes in another country, watches the team’s progress every day, and one glance at the task board helps them greatly in planning further work. If we don’t update our task board, then not only do we make our superiors’ work harder, we also give the impression that we haven’t done anything. It’s better to take such boards seriously and get familiar with one task tracking system even before starting work. It takes just half an hour of playing with an application like Trello, for example, to learn how tasks are handled in virtually any company. Seemingly simple, but sometimes hard to get used to, so I encourage you to start now.
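The whole workflow really is that small; this sketch (the task name is made up) is essentially all a task board does, and tools like Trello or Jira are just much richer versions of the same idea.

```python
# A minimal sketch of the board described above: three columns, and tasks
# move between them as work progresses.

board = {"to-do": ["fix login bug"], "in progress": [], "done": []}

def move(task, src, dst):
    board[src].remove(task)
    board[dst].append(task)

move("fix login bug", "to-do", "in progress")  # you pick the task up...
move("fix login bug", "in progress", "done")   # ...and close it when finished

print(board)
# {'to-do': [], 'in progress': [], 'done': ['fix login bug']}
```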

It must be admitted that working in IT attracts introverts. However, I must warn those who are downright antisocial and want to avoid contact with people at all costs: working in IT is quite often a team effort. It is very rare to be the only person responsible for an area; usually you are part of a team you have to work with. This does not mean you will spend the entire 8 hours writing code in a group of several people. Rather, it means not being afraid to ask someone on the team for help or clarification when you can no longer solve a problem yourself. There will also be times when several of you will work on a problem together. Generally speaking, you will have time to do your tasks on your own, but be prepared to collaborate as well, because all the strength is in a team playing toward the same goal.

The last thing you absolutely should have is self-reliance. Without a doubt, in the first days and perhaps weeks in a new position, many things will seem unclear and incomprehensible. This is only natural. Even experienced programmers will ask questions of less experienced colleagues who have been working on a particular project longer. However, everything has its limits, and your questions should be preceded by an attempt to tackle the problem yourself. If you ask someone on the team why something has been programmed a certain way, they will probably give you a comprehensive answer. However, if you ask how some publicly available library works, you can expect an assertive answer along the lines of “read the documentation” or “ask Uncle Google.” Remember that a great deal of knowledge is already on the Internet, and a very large amount of free code is available with documentation. It is also worthwhile, right away in a new job, to ask where the documentation for the company’s product is kept. Sometimes it will turn out that there is none, but that’s another problem. In any case, before you ask someone on the team about anything, try to answer the question yourself in every way you know. Remember that your new colleagues and boss are also watching to see whether you learn, and will expect your number of questions to drop to a minimum over time.

One last comment regarding skills. You have probably encountered more than once the opinion that working in IT, especially as a programmer, requires knowledge of mathematics. In my opinion, this myth was born in the days when in high school we wrote simple programs to calculate the area of a triangle or some other geometric figure. The real work of a programmer requires completely different skills, such as refactoring code into a more concise and readable form, or finding logical errors in code. Of course, there are some programmers who program mathematical models, but increasingly this role is being taken over by analysts and statisticians. Nowadays, I wouldn’t expect a programmer to be able to solve differential equations; programmers deal with completely different problems than mathematics on a daily basis. Therefore, if you want to become a programmer, you’d better focus on algorithmics rather than mathematics.

We have listed some general skills of IT workers. Now let’s focus on what to do once you’ve chosen your specialty. A question I sometimes get from friends who want to retrain is “what sources should I learn from?”. They usually expect an answer like “read book x” or “enroll in course y.” However, I must disappoint those expecting such a simple answer. On more than one occasion I have met candidates fresh from a weekend course who unfortunately did not know the basics. I have also more than once encountered people with several years of computer science studies behind them who were not familiar with the git system described above. On the other hand, I have a few times come across young people still of high school age whose knowledge in certain areas was intimidating. The key to gaining knowledge in a subject is not choosing the right source, but the humility to keep exploring it. There will always be something new to learn, always some new technology to tame. Even experts quite often read up on the latest changes in technology, because it’s simply impossible to keep up otherwise. Consequently, instead of asking where to learn from, just learn in the way that is most convenient for you, whether that’s online courses, books, documentation or anything else. If you have friends who have been in IT for a long time, you will probably hear something from them along the lines of “there’s no point in learning from books, because they get outdated quickly.” And while there is truth in that sentence, keep one thing in mind: it comes from the mouths of experienced specialists. They can learn a new technology in weeks, sometimes even days; in fact, they only need to read a couple of articles and, based on similarity to something they have already dealt with, they can cope by reading the documentation as they go. If you are a novice, however, it won’t make much difference that the book you are reading is two years old.
After all, the point is to land your first job, not to become a master of the technology in a few days. So learn in the way that is comfortable for you and, most importantly, be humble and don’t pretend to know everything. Especially during a recruitment interview.

Another question I have received several times from those interested in retraining is “is it worth getting certifications?”. Here my opinion is clear: at the beginning of your career, it’s definitely worth getting some kind of certificate, even before you start your first job. However, there is a condition. The certificate must be internationally recognized and have real value; you won’t impress anyone with a certificate for attending a one-day Linux course. If you want to show that you actually already know something, search the web for phrases like “best Linux certifications” or “best certifications for a Java programmer.” Look on English-language sites. Quite quickly you will form an opinion of what is currently considered valuable in your field of interest. You will probably find more than one ranking of the top 10 certifications for a given specialty; choose something from the top three and start preparing. Personally, I prefer certifications earned through a “hands-on” exam, where you are given a laptop, a problem to solve and limited time to solve it. However, certificates obtained after an exam with multiple-choice questions are also sometimes well received. It is also worth mentioning that exams quite often cost money, and the higher the level of the exam, the higher the price. Still, I think treating one such basic exam as an investment is a good move. After all, if an employer sees a resume with no industry experience but an interesting certificate in the relevant section, they may conclude that they are dealing with a person who has already made an effort and spent money on training in the field, and who must therefore be taking it seriously. The employer will certainly not pass over such a resume indifferently.

There’s one more thing I always advise friends who want to retrain, especially if their work will involve frequent contact with code. This applies primarily to programmers, but other specialties can benefit from it as well. Remember how I mentioned the git version control system at the beginning of this episode? Prove that you know how to use it and how to write readable code: create your own code repository on GitHub and make it publicly available. As a programmer, you can put some simple Uber clone in it. As a DevOps engineer, you can put in some code configuring infrastructure in the cloud. Show your ingenuity. Don’t assume that your code will be used by someone in production; just create your own repository and show it to your prospective employer. Even if the employer doesn’t like your code, you can ask what’s wrong with it, and they will probably give you advice on how to improve it. As a result, for the next recruitment meeting you will already have better code and, most importantly, knowledge gained for free from someone experienced. I encourage you to create your own repositories and share them with the world not only at the beginning of your career, but throughout it. It’s a brilliant way to get advice from all sorts of people on the Internet.

We get to the point where you are invited to an interview. Frankly speaking, in this industry there is no fixed pattern for such meetings. Sometimes the interview has one stage, sometimes three. Sometimes we are asked to solve a problem by writing code, sometimes we are just asked how we would solve a problem, and so on. No matter what the course of such an interview might be, I have only one piece of advice: honesty always pays off. When someone asks a question you don’t know the answer to, be honest that you don’t know, and briefly explain what steps you would take to find out if such a problem actually occurred on the job. A simple “I don’t know, but I would check the documentation or search the Internet” is sometimes enough. At recruitment interviews, it is very rare to find candidates who already have complete knowledge of the topics needed in a particular project. Sometimes people with years of experience don’t know the answers to basic questions because, for example, of the 8 technologies used in the project, they have dealt with six. So if you don’t know something, admit it honestly. A technical person will immediately notice an attempt to make something up on the fly.

I hope these few general tips will help you find your new place in this extremely dynamic world. As is always the case when looking for a job in a completely new industry, it initially takes some time to catch up with those who are already working. Everyone starts a little differently: some start sending out resumes earlier, others later. I can only assure you that if you take to heart the advice mentioned above, your chances of landing your first job in a technical position will increase significantly. It may even embolden you to negotiate a slightly higher salary. So I wish you the best of luck in learning new things and finding your new dream job, and success in your new position once it comes along. Maybe someday we’ll meet on some project and learn something from each other?

Does your phone track you?

We get in the car, enter the address of the building we want to go to into the app, then comfortably drive to our destination with our favorite music. We are guided by our smartphone which knows our location and the exact route to our destination. Nothing unusual for the 21st century.

  • But how does our phone know our location?
  • Does it use a single technology for this purpose or multiple?
  • And most importantly, what price do we actually pay for this luxury of navigating us to our destination?

Our location is a very valuable piece of data because it says a lot about us: based on it, our habits and preferences can be determined quite precisely. There are several technologies for tracking smartphone owners, and the possibilities for getting to know us through this data are quite formidable. Let’s take a look at all the methods our phones use to track our location.



GPS

Let’s start with the most obvious method of tracking our smartphone’s location: GPS. The Global Positioning System was introduced in the 1970s to allow the U.S. Department of Defense to determine location in real time. Every modern smartphone is equipped with a radio receiver operating at GPS frequencies. For the system to work properly, our smartphone must be in range of at least four GPS satellites and receive the radio waves they transmit. Each satellite knows its exact location and the time at which it transmits a message through these waves. Knowing when the satellite broadcast the message, its exact location at that time and the propagation time of the waves, one can determine the distance of the receiver, i.e. our smartphone, from the satellite. With at least four such distances, it is quite easy to calculate longitude, latitude and ellipsoidal altitude. In other words, basically all we need to calculate our location to within a meter is the messages received from four satellites. Note that our smartphone only receives and interprets signals that are available almost anywhere on the earth’s surface. There are currently dozens of GPS satellites orbiting the earth, so there is no concern about being out of range of at least four unless we are shielded by buildings. Despite its age, GPS is still one of the most accurate and efficient location systems. I have encountered more than once the wonder why, after boarding a plane and turning on “airplane mode,” the phone still knows its location. Well, “airplane mode” in our smartphones only disables reception and transmission on cellular network frequencies, and sometimes WiFi and Bluetooth; it has nothing to do with GPS. For that matter, if our phone has “airplane mode” turned on but is running the whole time, it can still collect data from satellites and determine its location.
Once it has landed and has Internet access again, it can just as well make all the stored history available for review to the applications to which we have granted location access. Turning on “airplane mode” really only blocks the ability to download the digital map against which that pretty dot representing our location is displayed in the navigation app. Locating us by GPS can only be turned off by toggling “location services” off in the phone’s settings; only then will our phone stop receiving radio waves from GPS satellites.
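The distance calculation at the heart of GPS is just speed multiplied by time. A minimal sketch, ignoring real-world complications such as receiver clock error and atmospheric delays:

```python
# The timing arithmetic at the heart of GPS, in idealized form: the message
# says when it was sent, the receiver notes when it arrived, and the travel
# time multiplied by the speed of light gives the distance to the satellite.
# (Real receivers must also solve for their own clock error -- one reason a
# fourth satellite is needed on top of three distances.)

C = 299_792_458  # speed of light, m/s

def satellite_distance(t_sent, t_received):
    """Distance in meters from the signal's travel time in seconds."""
    return C * (t_received - t_sent)

# A travel time of ~67 ms puts the satellite about 20,000 km away, which is
# roughly the orbital altitude of the GPS constellation:
print(f"{satellite_distance(0.0, 0.067) / 1000:.0f} km")  # 20086 km
```

With four such distances and the satellites’ known positions, the receiver solves for its three coordinates plus its own clock offset.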

Cellular network

The apps we use quite often ask for access to our location in order to monetize this data. Ad providers know that GPS may not work well underground or in buildings with very thick walls, and this is where another method of tracking our location comes to the rescue: cellular networks. Every phone has an IMEI number, or International Mobile Equipment Identity. The IMEI is unique to every mobile device in the world, regardless of the company that manufactured it. In addition, each SIM card has a unique IMSI number, or International Mobile Subscriber Identity. In order for a phone to work properly, that is, to actually be able to call someone, it must connect to a mobile base station. Each of you has probably seen a telecommunications mast with several or more than a dozen antennas pointing in different directions; this is a base station. Sometimes one such station is in our range, sometimes several. In order to make calls, a phone must continually check how many base stations are in its range, so it sends its IMEI and IMSI into the air every second to let the base stations know it is nearby. Each base station in range responds, so that the phone can determine which one is closest. What remains on each of these base stations is a record: “a device with this IMEI and this IMSI was nearby during these hours.” But as if that weren’t enough, our location can be determined using so-called triangulation. If we can determine the distance to the phone based on the signal strength from a given station, then with data from three such stations and simple geometry we can determine a fairly precise location: we know the exact positions of the base stations, and we know the distance of the phone from each of them. Thus, having the locations of the base stations and the history of the IMEI and IMSI identifiers they received, it is possible to reconstruct with fairly high accuracy where the phone was and at what time.
The phone can keep a history of the base stations it has visited, and the mobile network operator must keep, at least for some time, a history of the phones logged into its base stations. Does the phone manufacturer or the mobile network operator monetize the locations collected? To reiterate, this is quite valuable data, as it reveals our consumer habits: knowing what places we visit, they can learn our interests and suggest relevant advertising or offers.
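The “simple geometry” can be written down directly. This sketch assumes we already have estimated distances to three base stations (in reality, deriving them from signal strength is the hard, noisy part) and intersects the three circles on a flat-plane approximation; all positions and distances below are invented:

```python
import math

# Each base station gives a circle of possible phone positions; subtracting
# one circle equation from the other two cancels the quadratic terms and
# leaves two linear equations, solved here with Cramer's rule.
# Coordinates are meters on a flat-plane approximation.

def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero only if the three stations are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three towers and the phone's estimated distance from each one:
phone = trilaterate((0, 0), 5_000,
                    (10_000, 0), math.hypot(7_000, 4_000),
                    (0, 10_000), math.hypot(3_000, 6_000))
print(phone)  # ≈ (3000.0, 4000.0) -- the point where all three circles meet
```

With noisy real-world distance estimates the circles don’t intersect in a single point, so operators use least-squares fits over many measurements instead, but the principle is the same.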


WiFi

Sometimes cellular coverage can also fail, but this is not a problem for today’s technology; there are still other methods of locating devices, and WiFi is one of them. Just as IMEI and IMSI are unique identifiers of the device and SIM card in cellular networks, for WiFi such identifiers are the MAC address and BSSID, or Basic Service Set Identifier. If we are walking through the city with WiFi turned on, our phone may never connect to any router, but it will constantly scan the nearby space for access points. To locate the nearest access points (wireless routers, for example), our phone continually sends its unique MAC address out into the world. If an access point receives such a message, it responds with its unique BSSID. The BSSID is also routinely broadcast without any prior request, unless the default settings have been changed. Walking through the city, we may come across dozens or even hundreds of such access points. Our phone can store the entire history of points found and make it available to installed applications. Such data, later combined with data from GPS and cellular networks, is merged geographically, and global maps of WiFi access points are built from it. These global maps contain all the information gathered by simply listening to communications between smartphones and access points, even ones that never connected to each other but were merely checking what was around. Global maps of access points are updated all the time; you can find some online and browse them for free. So again, with a history of the nearby access points we passed while walking, we can be located with great accuracy. The more access points we passed, the more accurately we can be tracked.
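One simple way such an access-point map can be turned into a position is a weighted centroid: the phone reports which BSSIDs it heard and how strongly, and the estimate is pulled toward the APs with the strongest signals. Every BSSID, coordinate and weight below is invented for illustration:

```python
# A weighted-centroid position estimate from a WiFi access-point map: look
# up each BSSID the phone heard in a database of known AP positions and
# average those positions, weighted by signal strength.

ap_map = {  # BSSID -> (latitude, longitude), as in a global AP database
    "aa:bb:cc:00:00:01": (52.2300, 21.0100),
    "aa:bb:cc:00:00:02": (52.2310, 21.0120),
    "aa:bb:cc:00:00:03": (52.2320, 21.0110),
}

def estimate_position(scan):
    """scan maps BSSID -> signal weight (stronger signal = bigger weight)."""
    total = sum(scan.values())
    lat = sum(ap_map[bssid][0] * w for bssid, w in scan.items()) / total
    lon = sum(ap_map[bssid][1] * w for bssid, w in scan.items()) / total
    return lat, lon

# The phone heard three beacons, the first one much more strongly, so the
# estimate lands close to that AP:
print(estimate_position({"aa:bb:cc:00:00:01": 8,
                         "aa:bb:cc:00:00:02": 1,
                         "aa:bb:cc:00:00:03": 1}))  # ≈ (52.2303, 21.0103)
```

Real positioning services refine this with signal-propagation models, but a weighted centroid alone is already enough to place a phone on the right street.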


Bluetooth

Bluetooth is another very useful tool for locating us. Wireless headphones, watches and other devices are indeed very convenient. However, in some ways Bluetooth works just like other wireless networks. As soon as we turn on Bluetooth on our device, it immediately starts sending out information about, among other things, our phone model and unique MAC address. Yes, Bluetooth also has its own MAC address, just like WiFi. Such a signal can be received by anyone who also has Bluetooth on, even an ordinary passerby, without ever connecting to us. Bluetooth 5.1, one of the latest versions of the standard, is capable of locating devices with centimeter-level accuracy. No wonder that in hotels, shopping malls and other large facilities special Bluetooth beacons are sometimes installed, which constantly scan the environment for nearby devices. Such data is sent to servers with high computing power and processed further for various purposes. Isn’t information about all the stores visited and the time spent in them a tempting morsel for the owner of a shopping center chain? If we have a given shopping center’s app, we can receive notifications about discounts on potentially interesting products precisely on the basis of data collected this way. Admittedly, Bluetooth can be turned off quite easily. This means losing the connection to our wireless headphones, but on some phone models Bluetooth will still send its identifiers out into the world anyway. So to actually prevent this, you also need to disable Bluetooth scanning in slightly deeper settings.

Ultrasonic Cross-Device Tracking

We’ve probably now listed all the familiar wireless technologies, and as you can see, they can all be used to track our location. But there is one more technology, not really associated with connectivity at all, that is very effective and, worse, very hard to detect: it is difficult to tell if, and when, our phone is using it. Ultrasonic Cross-Device Tracking, or uXDT for short, is a technique that can work on almost any device: a smartphone, tablet, laptop, or basically anything equipped with a speaker and a microphone. The basis of the technique is ultrasound sent by a transmitter and received by our phone’s microphone. Suppose we are watching a commercial for some popular restaurant chain on TV, and nearby lies our smartphone with an app installed that is provided by that same restaurant chain. Since our TV has speakers, during the advertisement it can additionally emit very high-frequency sounds inaudible to humans. These sounds are captured by our phone’s microphone. If the restaurant chain’s application has access to the microphone, it can receive these sounds and interpret them accordingly, and depending on how the application is programmed, it can send the data on for analysis. Signals sent this way can carry on the order of a dozen bits per second over a distance of a few to several meters. That’s not much, but it’s enough to transmit an ad identifier. Upon receiving the identifier, the device with the installed application, i.e. our smartphone, can perform actions such as sending our phone number, location and many other things to the advertiser’s designated servers. The location data can come from the other technologies described above, or be inferred from the ad identifier itself, for example when a given ad was aired only in a given location.
What is even more attractive to the advertiser is that, regardless of the manufacturer of our devices or the accounts we log into on them, they can link all of our devices together and build an even more accurate profile of our consumer habits from the data received. Until now, if we used several different devices, the advertiser saw each of them as a different person: on the smartphone we looked at pictures of cats, on the tablet at dresses, and on the laptop at something work-related. With uXDT, the advertiser can link all of our devices and profile us even better. The ultrasound transmitter can also be our laptop: if we watch an advertisement on it, or simply visit some website that causes its speakers to emit ultrasound, we can be profiled just the same. Virtually any device that can play advertisements is capable of transmitting a uXDT signal. For this reason, owners of shopping malls, stadiums and other venues are interested in this technology for tracking their customers. Has it ever struck any of you that a smartphone app was using the microphone even though it didn’t seem to need it at all? This could be an indication that the app has such functionality. uXDT works with most speakers and most microphones available in stores, so it is no surprise that it can be used on almost any mobile device. One could say that with this technology our phones do not overhear what we say, but they can easily listen to what other devices in close proximity have to say. The only way to block uXDT in apps on our smartphones is to revoke or limit their access to the microphone. I encourage you to review your phone’s settings and pay close attention to whether certain apps have such access even though they don’t seem to need it.
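To show that a speaker-to-microphone channel like this is genuinely easy to build, here is a sketch of one possible encoding: frequency-shift keying near the edge of human hearing. All parameters are my own illustrative choices, not taken from any real uXDT implementation:

```python
import math

# Each bit becomes a 0.1 s tone burst at one of two near-inaudible
# frequencies; the receiver decodes by checking which carrier a burst
# correlates with more strongly. Real systems add error correction and
# must cope with room noise, which this toy version ignores.

RATE = 44_100            # samples per second (a standard audio rate)
BIT_LEN = 4_410          # 0.1 s per bit -> 10 bits per second
F0, F1 = 18_000, 19_000  # carrier frequencies for "0" and "1", in Hz

def encode(bits):
    samples = []
    for bit in bits:
        f = F1 if bit else F0
        samples += [math.sin(2 * math.pi * f * n / RATE) for n in range(BIT_LEN)]
    return samples

def decode(samples):
    bits = []
    for i in range(0, len(samples), BIT_LEN):
        chunk = samples[i:i + BIT_LEN]
        # correlate the burst against both carriers; the louder one wins
        power = [abs(sum(s * math.sin(2 * math.pi * f * n / RATE)
                         for n, s in enumerate(chunk)))
                 for f in (F0, F1)]
        bits.append(power.index(max(power)))
    return bits

ad_id = [1, 0, 1, 1, 0, 0, 1, 0]
print(decode(encode(ad_id)))  # [1, 0, 1, 1, 0, 0, 1, 0] -- the bits survive
```

Ten bits per second sounds useless until you remember that an ad identifier is only a few dozen bits long, exactly as the paragraph above notes.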


Information about our location is very valuable and quite easy to monetize. In fact, simply by carrying our favorite devices with us, we make it available to various entities ourselves, making us an even easier target for advertisers. There really are many such technologies, and used simultaneously they become extremely powerful and effective tools for profiling customers; entire teams of statistics specialists are employed for exactly this. Big data is a big industry today, and it is going to get bigger as acquiring consumer data becomes easier and easier. So whether we like it or not, if we want to experience a moment truly alone, let’s leave all our modern devices at home, because nothing is a better spy than our own smart gadgets.

Watch out for these protocols

We all, consciously or not, use protocols of some kind. If we wanted to abandon them altogether, we would have to give up not only the Internet but also the telephone. Protocols are communication standards that specify how data is exchanged between sender and receiver, sometimes down to the hardware level. As a rule, protocols are designed to serve users for many years and to survive updates of the programs that use them. That is, when upgrading some program from version 10 to version 11, we can expect the new version to use exactly the same protocol: the program gains new functionality, but the protocol it uses very often remains the same. What is rarely said, however, is that protocols, like software, also age and need to be replaced from time to time. Such a need actually arises quite rarely. But if everything indicates that you are using an old protocol that is considered unsafe today, or that has long had a much better replacement, you should very seriously consider an upgrade. In today’s episode, we will list 5 protocols that are still quite popular but that you should watch out for, for various reasons.

As a rule, protocols are updated along with either the software or the operating system. Often we simply select in a program’s settings which protocols we want to use, choosing from the list the one that is more secure or more efficient. So if any of you are using one of the protocols I am about to list, I encourage you to review the alternatives. Nothing needs to be installed beyond the software itself, and it is a good idea to review the protocols available in our programs once a year.


The first protocol to always look out for is FTP, or File Transfer Protocol, an extremely popular protocol for transferring files. The origins of FTP date back to the early 1970s, when the name “Internet” was just being forged in the academic community and there wasn’t much talk of hackers. In those days, little thought was given to securing data as it traveled between computers. And so a standard was created that sends all usernames and passwords during authentication in plain text. They are not encrypted, so anyone able to intercept our communication over this protocol can very easily read these credentials and, if they want, anything we download or upload. FTP was a great protocol for its time, but today it does not meet basic security standards. Let’s imagine a scenario like this. Suppose I have my own website on some server. I upload the site’s graphics and code there via FTP. Someone eavesdrops on my communication and quickly learns my username and password. I guess the easiest thing for such an attacker to do would be to log into my site and start hosting content that damages the company’s image. But that would probably be too easy. After all, I would find out pretty quickly that something is wrong and act. I would learn to put more weight on security and start preparing for such situations. What if the attacker is smarter? Instead of doing something that immediately attracts attention, he decides to put something on the server that will go unnoticed for a long time. Maybe a cryptocurrency miner? Maybe he attaches the server to a botnet or sends spam? Or, even worse, he starts checking whether my e-mail account has the same password as my FTP account. And if I use the same password everywhere, how many services have I handed over in this way? I could multiply such scenarios.
FTP is one of the least secure protocols today, and yet I still very often see it used routinely. If I don’t know exactly the communication path between my laptop and the target server, I don’t use this protocol, because it is simply unsafe. There are good replacements for FTP, and they are readily available. One is called SFTP and the other FTPS. Both are, in effect, file transfer protocols wrapped in an additional encryption tunnel provided by newer technologies. These protocols provide security and confidence that the server you are connecting to is actually the one it claims to be. Plain FTP simply relied on trust: we had to trust that the target server was who it claimed to be. In general, I don’t dismiss FTP completely. For transferring public data, or inside a private network, it can still be useful. I only want to make you aware of the security aspect of the Internet, which is an untrusted network. If our communication goes over the Internet, we must always assume that someone may be watching, and FTP will not protect us from that.
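To make the difference concrete, here is a minimal sketch of an FTPS upload using Python’s standard-library `ftplib`. The host, credentials and file names are placeholders for illustration, not real values:

```python
from ftplib import FTP_TLS

def upload_file(host, user, password, local_path, remote_name):
    """Upload one file over FTPS instead of plain FTP.

    All parameters here are hypothetical placeholders.
    """
    ftps = FTP_TLS(host)        # control connection, port 21
    ftps.login(user, password)  # login() negotiates TLS first (AUTH TLS),
                                # so the credentials travel encrypted
    ftps.prot_p()               # switch the data channel to TLS as well
    with open(local_path, "rb") as f:
        ftps.storbinary(f"STOR {remote_name}", f)
    ftps.quit()
```

With plain `ftplib.FTP`, the same `login()` call would send the username and password in plain text; switching the class to `FTP_TLS` and calling `prot_p()` is all it takes to close that hole.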


The second protocol no longer worth using is POP3, or Post Office Protocol version 3. This is a rather vintage protocol for retrieving e-mail from a server with mail clients such as Thunderbird. Its capabilities are rather limited. For example, it only lets you download an e-mail message in its entirety: it is not possible to fetch just the message headers, or to skip the attachments and download only the text. How many times have we received a notification about a message that clearly does not interest us? For example, a discount offer from some mailing list we carelessly signed up for. Seeing the header of such a message, we immediately know it will land in the trash in a moment. Unfortunately, with POP3 we can only download the entire message to our device and then delete it. This creates additional risk. If the message is infected, we download it in its entirety, malicious code and all. If we delete it right away, probably nothing will happen, but why create another dangerous situation and generate unnecessary network traffic?

Nowadays, most of us have several e-mail accounts. Even someone who tries to minimize the number of mailboxes usually has at least two, one private and one for work. If we use POP3 to synchronize messages with our computer, they all end up in one directory, and the natural division between personal and work mail is lost. In addition, e-mail clients are almost always configured by default to download each message from the server and then delete it there. As a result, a received e-mail is essentially moved: once it lands on one of our devices, it can no longer reach another, because it has already been deleted from the server. The result is chaos: some messages are on our smartphone, others on our laptop, still others on our tablet, and nowhere do we have all of them. A mail client can usually be configured to leave messages on the server, but you have to remember this every time you set up a new client, such as on a new laptop or smartphone. A good replacement for POP3 is the IMAP protocol, which solves all the problems mentioned above. It can download whole messages or only parts of them, so we can see just the title and sender of a message and order its remote deletion without downloading it locally. IMAP synchronizes multiple directories and multiple mailboxes while preserving the directory structure, so if you have several mailboxes with different directories, the entire structure survives synchronization. IMAP synchronizes by default, reconciling the newer state with the older one. If a change has occurred on the server, such as a new e-mail, the state on our device will be updated; if the change occurred on our device, the state on the server will be updated instead. So to delete something from the server, all we need to do is delete it locally.
IMAP is undoubtedly a better protocol than POP3, and for this reason it is worth switching to.
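The header-only capability described above can be sketched with Python’s standard-library `imaplib`. The host and credentials are placeholders; the key line is the `BODY.PEEK[HEADER]` fetch, which POP3 has no equivalent for:

```python
import imaplib

def print_header_lines(host, user, password, mailbox="INBOX"):
    """Read only the headers of each message, leaving the bodies
    (and any malicious attachments) untouched on the server.

    Host and credentials are hypothetical placeholders.
    """
    conn = imaplib.IMAP4_SSL(host)            # whole session is encrypted
    conn.login(user, password)
    conn.select(mailbox, readonly=True)       # don't modify anything
    _, data = conn.search(None, "ALL")
    for num in data[0].split():
        # BODY.PEEK[HEADER] downloads headers only -- sender, subject, date
        _, msg_data = conn.fetch(num, "(BODY.PEEK[HEADER])")
        print(msg_data[0][1].decode(errors="replace"))
    conn.logout()
```

Seeing just the sender and subject this way is enough to decide whether a message is worth downloading at all.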

Wired Equivalent Privacy

The third protocol should actually be disabled everywhere right away. This is WEP, or Wired Equivalent Privacy. Remember that wireless devices connect to the network via radio waves. Since these are radio waves, to eavesdrop on all communication it would be enough to have an antenna tuned to the right frequency, stand close enough, and start receiving. WEP was introduced in 1997 to secure these communications. Since you can’t hide the transmitter and receiver, you have to encrypt the messages they exchange. This way, even though we know which device is sending and which is receiving, we will not know the content of the communication. The purpose of WEP was precisely to provide such encryption. Unfortunately, as the computing power of computers has grown, breaking WEP has become quite easy. Today, after collecting enough data with such an antenna, cracking the password can take less than 60 seconds. Although WEP has been considered obsolete since 2004, you can still come across devices that use it. I encourage you to review your Wi-Fi router’s settings right away, replace WEP with the newer WPA2 standard, and set a strong password of at least 16 characters. If your router supports only WEP or WPA and not at least WPA2, I would seriously consider replacing it with a newer device. I also encourage you to review your devices and, of course, change any default password to a longer and more difficult one.


The fourth protocol to watch out for is HTTP. This is the protocol we use to view websites, and the same one our smartphone apps usually use to connect to their servers. Let me say at the outset that it is impossible to stop using HTTP altogether; in the era of web applications, HTTP is the absolute mainstay. I just want to make you aware of the difference between HTTP and HTTPS. HTTPS is basically HTTP with an additional encryption tunnel that guarantees security and allows you to verify that the server you are connecting to is actually who it claims to be. When we connect to a website over HTTP, every device in the communication path between us and the server is able to “see” what we are doing. HTTPS, by providing encryption, not only hides all of this but also allows the server to be verified. It can be risky to send passwords over HTTP or to download files that are important to us. On the other hand, if we are looking up our dentist’s website just to find a phone number and make an appointment, one might consider the additional security not that important. I leave the question of privacy for each listener to judge by their own standards. In any case, using HTTP without an additional layer of security may be acceptable, depending on the case. Unfortunately, to this day I still see unsecured websites with contact forms asking for personal information and a phone number. I advise against trusting such sites. Providing HTTPS is very cheap today, so its absence is, in my opinion, a sign of negligence. Recognizing whether you are using HTTPS is very simple: just look for the padlock or certificate symbol in the URL bar at the top of the browser window. When we click it, we should be able to see the security certificate.
Passwords, logins, personal information, financial information, or indeed anything that should remain confidential should be sent over HTTPS, not over unsecured HTTP.
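The certificate behind the browser’s padlock can also be inspected programmatically. Here is a small sketch using Python’s standard-library `ssl` module; the host name passed in is whatever site you want to check:

```python
import socket
import ssl

def peer_certificate(host, port=443):
    """Return the TLS certificate a server presents -- the same one the
    browser verifies behind the padlock icon.

    Raises an error if the certificate fails verification, which is
    exactly the protection plain HTTP lacks.
    """
    ctx = ssl.create_default_context()        # uses the system CA store
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()          # subject, issuer, validity

# Example (requires network access):
# print(peer_certificate("example.org")["notAfter"])
```

If the server’s certificate is expired, self-signed, or issued for a different name, `wrap_socket` raises an exception instead of silently proceeding, which is the whole point of HTTPS over HTTP.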


The last protocol to watch out for is SMB version 1. Some of you are probably wondering what SMB even is and what it is used for. SMB, or Server Message Block, is a protocol used in Windows systems to share files and printers over a network, and it is the most popular protocol for such applications. It has since seen versions 2 and 3, but from the security point of view we should be most interested in version 1, or SMBv1 for short. The history of this protocol dates back to the early 1980s. SMBv1 was declared obsolete in 2013 and is not installed by default on newer operating systems. However, there are still many old devices, such as network printers or file servers, that use it. Many people assume that since something works, there is no need to replace it, and so very old infrastructure is often maintained even while laptops and smartphones are replaced much more readily. And to make these newer devices work with an old printer or file server, we sometimes knowingly enable an outdated protocol ourselves. In May 2017, a whole group of SMBv1 vulnerabilities, called EternalBlue, was published. Using these vulnerabilities, an attacker can run virtually any code on a server speaking this protocol, or block its communications entirely. Also in 2017, there was a massive attack on Windows computers using this protocol. These machines were infected with malware called WannaCry, or WannaCrypt, which encrypted the entire contents of their drives and demanded a ransom for decryption. It is estimated that some 300,000 computers in nearly 100 countries were infected this way. In fairness, it should be acknowledged that Microsoft has released patches for SMBv1, but an estimated million or so old devices that were never updated may still be vulnerable. I encourage you to check whether one of your older devices is not, by chance, using SMB version 1.
It is possible that the device will serve for years without problems. But it is enough to give a friend your Wi-Fi password, or to unknowingly bring home some malicious code on your laptop, and you may fall victim to an attack. It is not worth the risk, especially if you store important files on a server using this protocol.


Like software, protocols age and are phased out. The five above do not exhaust the list of vulnerable or obsolete protocols still in use, but they are the ones we may unknowingly be using at home or in the office. They are basically part of our daily lives, while the others, which I have chosen to omit, mainly interest network administrators. Whether we use a small home network or go to the office, we should be careful about the protocols listed: some of them may simply cost us extra work time, while others may expose us to big losses. How we use our computers and secure our networks is up to us.


8 words we understand differently in IT


I guess every industry has its own terminology and sometimes its own jargon, and IT is no exception. If I were to describe how IT professionals communicate, I would say they use a language composed of 80% Polish and 20% more or less correct English. Their speech is also full of three-letter abbreviations and acronyms taken directly from English-language documentation of new technologies. The industry has been growing rapidly for a long time, so it is hardly surprising that the name of a new technology is adopted immediately, with no time to coin native words. There are, however, some exceptions, quite a few of which were adopted years ago. IT engineers sometimes use words that sound exactly the same as words everyone uses daily, but mean something completely different to them. This sometimes leads to misunderstandings. In today’s episode, I’ll list some words that IT engineers understand a little differently than everyone else. I hope that after listening to this episode, understanding your IT friends will become a little easier.

1. Environment

The first word that IT professionals understand differently than non-IT people is “environment.” For an IT professional at work, “environment” has nothing to do with nature or biology. Rather, it describes the surroundings of a running application: the operating system it runs on, or the cloud or colocated server room that hosts it. Since an application can be freely copied, it can have several such environments. One may be created specifically so that new code changes can be quietly tested in it, without fear that a paying customer will demand a refund. Another may be created specifically for the customer, so that he in turn can use the application in peace, knowing that no one will shut it down for a test. Ultimately, the customer should be given a product that has been tested and is ready for use. “Environment,” then, covers everything an application runs in, from the nearer surroundings, such as the operating system, to the further ones, such as the server infrastructure. It is quite a broad term for an IT specialist, so in conversation it sometimes needs to be made more specific.

2. Production

The second word that IT professionals understand differently, and which is closely related to “environment,” is “production.” “Production” is shorthand for “production environment.” Just as we can have a test environment, where the latest changes to an application are tested, we can have a production environment: the one actually used by customers. In this environment the application runs with the latest tested changes that are considered stable. In companies developing applications we sometimes hear the phrase “we can let these changes go to production.” This means the code changes made by the development team have been tested in the test environment and can be released to “production,” the production environment, for customers to use. Users of the application always use the copy running in the production environment, or “production” for short.

3. Abstraction

The third word we often use, and which is also a spark for misunderstandings, is “abstraction.” Most people probably associate abstraction with something contrived and useless, or with a strand of modern art. In IT, the word has a completely different meaning, and it stems from the specific nature of programmers’ work. Around the world, different teams of programmers, sometimes within a single company, deal with completely different, unrelated issues. Some program the communication between a Bluetooth mouse and our laptop, while others design graphical interfaces. All of them, of course, work so that their efforts eventually merge into one coherent product, for example a program. But the Bluetooth programmer has no time to study GUIs, and the GUI programmer may not have a clue how Bluetooth works. For this reason, every programmer tries to encapsulate his or her work in what we call an “abstraction” or “abstraction layer.” A Bluetooth programmer hides the vast majority of the functionality in parts of the code that only he knows, and exposes the remaining parts so that they are very simple to understand and use. If another programmer working on some other issue needs to use Bluetooth, without changing how it works, all he needs to know are these few simple, documented functions. He won’t have to study thousands of lines of code to learn exactly how Bluetooth works; he will just use functions with descriptive names such as “Connect” or “Disconnect.” How exactly these functions work, and how many processes run inside them, is no longer relevant to him. A person using the results of the Bluetooth programmer’s work can grasp them quickly without learning the implementation details. This also lets the Bluetooth programmer deal only with Bluetooth, while programmers with other specializations take care of their own duties.
Such encapsulation of larger functionality in a kind of black box is what we call creating a “layer of abstraction.” If we want to use some functionality, all we need is the small set of commands the black box offers; what happens inside it does not have to concern us. One might even be tempted to say that building layers of abstraction is a big part of why IT is developing so quickly. One programmer builds one layer, another builds a second on top of it, yet another gathers several and builds on top of those. As a result, somewhere at the top of these layers, extremely complex programs are created that combine a whole lot of functionality and keep getting easier to use. So even though the word “abstraction” sounds a bit lofty, it really describes the process of reducing something complex to a simple form that most people can understand intuitively.
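The Bluetooth example above can be sketched in a few lines of Python. The class name and its internals are hypothetical; the point is that a caller only ever touches `connect()`, `send()` and `disconnect()`:

```python
class BluetoothLink:
    """A hypothetical abstraction layer over a Bluetooth stack.

    Callers see only connect(), send() and disconnect(); the pairing,
    radio and packet details stay hidden inside (here, just simulated).
    """

    def __init__(self, device_name):
        self.device_name = device_name
        self.connected = False

    def connect(self):
        # In a real stack: scanning, pairing, channel negotiation...
        self.connected = True

    def send(self, data):
        if not self.connected:
            raise RuntimeError("not connected")
        # In a real stack: fragmentation, retransmission, encryption...
        return len(data)

    def disconnect(self):
        self.connected = False


mouse = BluetoothLink("Office Mouse")
mouse.connect()
sent = mouse.send(b"move +3,+1")
mouse.disconnect()
```

A GUI programmer using this class never reads its body; the three descriptive method names are the entire surface of the black box.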

4. Object

Speaking of abstractions, it is impossible to leave out the fourth word: “object.” Objects are precisely one way of creating layers of abstraction. If a programmer were to model a car, he would create an object, a virtual construct containing data such as engine capacity or top speed. Such an object would also have functions like “start engine” or “accelerate.” Having such an object type, the programmer can very easily create as many virtual cars as he wants, adjusting only the object’s parameters to decide whether it is a sports car or a delivery truck. If the programmer wanted to refuel such a car, he would take it to another object, of the “fuel pump” type, and run its “refuel” function. It is worth noting that there is one function called “refuel,” not a dozen functions named “turn off the engine,” “open the flap,” “unscrew the cap” and a dozen more, with “pay” at the end. One simple “refuel” suffices, and the whole process is carried out according to the programmer’s best knowledge of that car model. Objects greatly simplify programmers’ work. Not only do they encapsulate data inside themselves, they also let us act on that data through functions whose details we don’t really need to know. When we see a very descriptive function name like “accelerate,” we don’t have to think about how fuel injection works. We just press the gas and go faster.
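The car and fuel pump described above translate almost word for word into code. This is a minimal sketch with made-up attributes; the one-call `refuel` hides the whole multi-step procedure:

```python
class Car:
    def __init__(self, top_speed, tank_capacity):
        self.top_speed = top_speed           # data lives inside the object
        self.tank_capacity = tank_capacity
        self.fuel = 0.0
        self.engine_on = False

    def start_engine(self):
        self.engine_on = True

    def accelerate(self):
        if not self.engine_on or self.fuel <= 0:
            raise RuntimeError("cannot accelerate")
        self.fuel -= 1.0                     # burn one unit per call


class FuelPump:
    def refuel(self, car):
        # One simple call hides the whole procedure: stop the engine,
        # open the flap, fill the tank, pay...
        car.engine_on = False
        car.fuel = car.tank_capacity


# Same type, different parameters: a sports car and a delivery truck.
sports_car = Car(top_speed=280, tank_capacity=60)
delivery_truck = Car(top_speed=120, tank_capacity=150)

pump = FuelPump()
pump.refuel(sports_car)
sports_car.start_engine()
sports_car.accelerate()
```

The caller never touches `fuel` or `engine_on` directly; the descriptive method names are the whole interface.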

5. Print

The fifth word that programmers in particular understand differently is “print.” In countless programming languages, the “print” function, or one similar to it, literally writes some text to a file or to the screen. Usually, when a programmer learns a new programming language, the first thing he or she does is use print or a similar function to write the words “Hello World” on the screen; this has become a kind of tradition. If any of you pick up a book on the basics of almost any programming language, there is a very good chance that somewhere in its first pages you will find the code print(‘Hello World’). When such code is run, the screen displays this simple “Hello World” message. And since the English word “print” also means printing on paper, when programmers say they will “print” or “print out” some text, they usually just mean writing it out on the screen.

6. Exception

The sixth word I wanted to mention is “exception.” An exception is indeed something fairly special for a programmer, but not in the way we would like: an exception is a kind of error that occurs while a program is running. When a programmer implements some action and realizes that not everything can always go as planned, he should prepare his code for such an exceptional situation. For example, if a very large file is being loaded, the programmer may assume that on weaker computers memory can fill up completely. In that case, during loading, the operating system will report an exception for excessive memory usage by the application. It is the programmer’s duty to prepare a set of routines for such a circumstance, aimed at remedying the situation somehow, for example by loading only part of the file. Otherwise, the program may be forcibly closed. Have any of you ever seen an error notification reading “unhandled exception”? That is precisely such an exception, and, what’s worse, one for which no repair procedure was programmed; that is why the system often forces the program to close immediately afterwards. The exception, then, is something very important to the programmer. It is indeed an exception, but unfortunately in the bad sense. In general, exceptions can carry information about the type of error, or they can be unspecified, but every time, the programmer must be ready for them.
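The file-loading scenario above can be sketched in Python. The `except` blocks are the “repair procedures” the episode talks about; without them the exception would be unhandled and the program could be forcibly closed. The path is a hypothetical placeholder:

```python
def load_file(path):
    """Try to load a whole file; recover from anticipated exceptions."""
    try:
        with open(path, "rb") as f:
            return f.read()             # may raise MemoryError on huge files
    except MemoryError:
        # Repair procedure: load only the first 1 MB instead of everything.
        with open(path, "rb") as f:
            return f.read(1024 * 1024)
    except FileNotFoundError:
        # Another anticipated exception, with a different remedy.
        return b""
```

If a different, unanticipated exception type were raised here, it would propagate upward as an “unhandled exception”, which is exactly the situation the programmer tries to avoid.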

7. Server

The seventh word, which I myself have already used more than once in previous episodes, is “server.” You most likely understand this word correctly, but one point deserves clarification. A server can be a physical or virtual machine running some operating system. Sometimes, however, the word “server” describes a server application that waits on that physical server for new tasks or connections. We sometimes hear from administrators that “there is a mail server running on this machine,” and in this context a mail server is a mail application running on some server. This ambiguity sometimes creates confusion, but you quickly learn to tell from context which server your interlocutor means. As a last resort, you can always ask, and there is no shame in that.

8. Kernel 😀

The last word, which is especially often the cause of somewhat amusing situations among IT professionals, is “kernel.” The kernel of an operating system is the program responsible for virtually all communication between the electronic hardware in our computers and the installed software. It is basically the foundation of the operating system. Many would say the kernel essentially is the operating system, and everything else, like the GUI and drivers, is an add-on. The kernel, sometimes also called the “core” of the system, runs almost from the moment the computer starts, and we can’t shut it down without shutting down everything else. It is involved in virtually every action we perform on our computers. After all, a computer is an electronic device, and we can’t do anything with it without communicating with its electronics; the kernel is precisely that center of communication with the hardware. In Linux systems, the kernel is updated quite often, like other software, so you can keep several versions of it locally in case a new one doesn’t work for some reason. Don’t be surprised, then, if an IT friend, most likely an administrator, tells you in all seriousness that he needs to remove some old kernels because they take up too much space. English resolved this nomenclature a little more cleverly, so Polish IT speech often borrows directly from English: instead of the native Polish word, the English “kernel” is the most popular substitute.

And many, many more...

There are many more words that computer scientists understand differently. To know them all, you would probably have to start working in this profession, which I strongly encourage you to do. I hope that the few words listed here will help you understand your programmer, administrator or tester friends a little better, as IT increasingly appears mysterious and inaccessible. Maybe it’s because of all those layers of abstraction?


What are web applications and how are they developed?


We often hear that someone has a new app on their smartphone that solves daily problems or makes something easier: an app for ordering a cab, ordering food, buying movie tickets, navigation, or whatever. But in fact mobile applications (that is, the ones installed on our smartphones) are quite often just programs to remotely operate a web application that runs somewhere on the Internet. In this approach, a mobile application is created mainly for the user’s convenience, so that the web application can be used in a nice and easy way; without the web application, the mobile application makes no sense. We have Internet access practically everywhere, so we don’t always even think about which of our mobile applications talks to a web application and which works independently. And it’s the web apps that actually do most of the work and problem-solving, then send the mobile ones the results, which can be displayed nicely. Let’s talk about what web apps are and how they are developed.

Let’s start with the basics. A web application is a collection of server applications that work together for some purpose. A web application can consist of such server applications as a database, a business logic application, a payment application, and let’s say an application that allows you to display an interactive web page. Since a web application consists of several server applications, we can call these server applications components of the web application. So we have several components that run on one server or several servers. Some of them we can develop ourselves, and some of them we can simply run using ready-made solutions. The developers’ task is to provide appropriate ways of communication between these components, and the task of cloud administrators and engineers is to properly prepare the infrastructure and supervise it.

As a rule, applications are written in programming languages, of which there are many, and they change over time like almost everything in this industry. However, we can distinguish two groups of languages at this stage.

The first group is interpreted languages. When a programmer writes the code of such an application, running it requires a so-called interpreter. An interpreter is a program that reads the source code, in text form, line by line and executes the programmed instructions on that basis. So we can have the entire application source code, but without an interpreter it gives us nothing, because we cannot run it. Besides the interpreter and the source code, we must also have the libraries our code uses. A library is simply a set of functions and parameters used by the application. If writing every program in the world meant redefining, each time, the standards for math, writing to files, reading, the calendar, and hundreds of other things our program does, the application development process would take many years. For this reason, libraries are created to encapsulate sets of functionality that other programmers can later use in their projects. Libraries can therefore be developed independently of any application and used in many different, unrelated applications.
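A tiny Python illustration of this reuse: instead of reimplementing square roots or leap-year rules, we import standard libraries that already encapsulate them.

```python
# Libraries encapsulate ready-made functionality we would otherwise
# have to write ourselves for every single program.
import calendar
import math

diagonal = math.sqrt(3**2 + 4**2)   # the math library does the numeric work
leap = calendar.isleap(2024)        # the calendar library knows the rules
print(diagonal, leap)               # -> 5.0 True
```

The same `math` and `calendar` libraries serve thousands of unrelated applications, each developed independently of them.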

The second group of languages is compiled languages. In this case, a compiler is needed instead of an interpreter. A compiler is also a program that reads the source code of an application, but it turns it into what is called machine code. This process is called “compiling” the code: from the source code we create a file that we can actually run. After compilation, the compiler is no longer needed to run the program. Libraries are usually still needed, however. Libraries in such languages are also compiled; we cannot run them directly, but the program we write can use their functionality.

Now that we know there are two types of programming languages for developing web applications, we can consider how these applications are built. Let’s assume, for the purposes of this episode, that I am developing a very simple web application consisting of a so-called “frontend”, the component that displays the graphical interface and receives requests from my clients; a “backend”, the component containing the entire business logic of my web application; and a database, which stores data about my users and everything else the application needs to operate. Real web applications are of course much more complex, but let’s focus on a simple application made of these three components. Two of them clearly need to be developed by me: the frontend and the backend. These components make up the core of my web application. The frontend displays the GUI, assembles client requests into coherent structures, and sends them to the backend. The backend receives a request, constructs a query to the database, sends it, and then acts on the results it receives. If necessary, it sends several queries to the database. Once it has all the responses, it packages them into a coherent reply and sends it to the frontend. The frontend, understanding the reply received from the backend, displays the results. Let’s illustrate this with an example. Suppose the web application I am developing is used to book and rent ski equipment. We go to the website where my application lives. A nice, colorful user interface appears before our eyes. This is the frontend, showing text fields and other controls we can fill in to specify what we would like to rent. When we enter in the appropriate fields that we need skis with specific dimensions and a helmet in size S and green color, the frontend collects all this information and sends it to the backend. The backend knows how to deal with such a request.
Based on the given preferences, it creates an appropriate query to the database to find out whether such equipment is in stock. The database answers that the skis are available, but no helmet in size S and green was found. The backend, seeing that there is no helmet in this color, can send another query to the database, this time without the color criterion. When it receives results, it replies to the frontend with a list of available skis (since we were mainly looking for skis), the information that no green helmet was found, and a list of helmets in other colors that were. The frontend, receiving this response, knows how to gracefully apologize to the customer for the inconvenience and subtly suggest the other helmets.
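To make this flow a little more concrete, here is a minimal sketch of the backend’s fallback logic in Python. The in-memory “database”, field names and stock items are all invented for illustration; a real backend would query an actual database instead of a list.

```python
# Toy in-memory "database" of rental stock (hypothetical items).
STOCK = [
    {"type": "skis", "length_cm": 170},
    {"type": "helmet", "size": "S", "color": "blue"},
    {"type": "helmet", "size": "S", "color": "black"},
]

def query_db(**criteria):
    """Stand-in for a real database query: return items matching all criteria."""
    return [item for item in STOCK
            if all(item.get(k) == v for k, v in criteria.items())]

def handle_helmet_request(size, color):
    """Look for an exact match; if none, retry without the color criterion."""
    helmets = query_db(type="helmet", size=size, color=color)
    if helmets:
        return {"exact_match": True, "helmets": helmets}
    # Fallback query, as in the story above: same size, any color.
    return {"exact_match": False, "helmets": query_db(type="helmet", size=size)}
```

With this sketch, asking for a green helmet in size S reports no exact match but still returns the blue and black alternatives for the frontend to suggest.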

In this example I’m responsible for the frontend and the backend. I can run any database I want; many databases are available nowadays, so there is little sense in writing your own. So how are the other two components developed?

Let’s assume that the frontend is developed by a team of Python programmers, Python being one of the most popular interpreted languages, and that the backend is developed by a team of Go developers. Both components are developed separately. It is not possible to make these teams fully independent of each other, but it can be done to a large extent. Let’s say a bug was found somewhere in the backend and one of the programmers has just fixed it. He sends his code changes to the version control system and declares that they are ready to be accepted and made permanent in the backend. Someone from the team reviews his changes and makes comments if necessary. This time, however, there are no comments, because the change is small and the bug, if not fixed quickly, could leak customers’ personal information. Our hero’s teammate therefore accepts the code changes and merges them into the component’s code. At this step, all human interaction ends. Everything that happens to the code from here on is automated.

As soon as the source code version control system notices the change described above, it starts what in our nomenclature is called a “pipeline”; the English term is the accepted one here, as translated equivalents have never really caught on in industry terminology. The pipeline in question is a process that is executed when the appropriate circumstances arise, and a change to the code can be one of those circumstances. During this process, many things happen. Among others, the component is automatically compiled with the new code and tested. Then the whole web application is placed in a specially prepared test environment to perform integration tests between the individual components. Next, system tests perform actions affecting the whole application in search of errors. Then more tests are performed, and then even more tests. Generally speaking, the more tests are performed, the greater the chance that bugs are discovered at an early stage, rather than when the application is already available to customers. The order and scope of the tests depend, of course, on the particular web application and on the decisions of the test engineers.
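The stages described above can be sketched as a tiny pipeline runner: stages execute in order, and the first failure stops the process. The stage names and the string checks are entirely made up for illustration; real pipelines run compilers and test suites on CI servers.

```python
# Each stage is a function that inspects the "code" and reports success.
def compile_stage(code):
    return "syntax error" not in code

def unit_tests(code):
    return "bug" not in code

def integration_tests(code):
    return True  # pretend the integration environment always passes here

PIPELINE = [("compile", compile_stage),
            ("unit tests", unit_tests),
            ("integration tests", integration_tests)]

def run_pipeline(code):
    """Run stages in order; return (succeeded, stages_run), stopping at the first failure."""
    ran = []
    for name, stage in PIPELINE:
        ran.append(name)
        if not stage(code):
            return False, ran
    return True, ran
```

The point of the sketch is the control flow: a broken change never reaches the later, more expensive stages, let alone the customers.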

During this process, something happens beyond testing that today constitutes the core of modern web application delivery: containerization. Containerization is the process of encapsulating an application component in a specially crafted object called a “container”. The container imitates an operating system and allows the component to run inside it. Of course, you may ask why do this at all, when we can run a component of a web application without the additional complication. Contrary to appearances, there are many advantages. First of all, with proper container management you gain significant resistance to failures. The nature of a container is such that if the program running inside it shuts down, the container management system notices this event and immediately arranges a restart, repeating the process until the component is running again. So, as long as the component works relatively well and crashes rarely, this should not significantly harm the stability of the entire web application. The second advantage of using containers is the ease of migrating applications. If every component of the application runs as a container, moving it is very easy. Large companies with very large applications running in the cloud have at times decided to switch cloud providers; moving containers was fairly straightforward in those cases, which could not always be said of applications that were not containerized. A third advantage is scalability. A container is, in principle, a small piece of a web application that should be able to run both alone and as one of hundreds of identical containers. For a container management system, running one container or hundreds is just a matter of setting one number in the configuration. It is also possible to dynamically scale the number of containers depending on the application load. The fourth advantage is keeping servers in order. In the old days, updating applications used to be quite breakneck,
all the more so if programmers updated some libraries in their code or the interpreter itself. Generally, for many reasons, any change on a server was often a challenge, because you had to deal with a whole tangle of dependencies. Containers solve this problem up front. It is during the containerization process that the application component, its libraries and all its dependencies, including the interpreter if one is needed, are placed inside the container. In other words, a properly built container should have absolutely everything needed to run the component, and absolutely nothing else. Updating an application in a container management system thus involves changing one or a few container version entries in the configuration, instead of hours of admin work. The system simply notes that it is supposed to run a newer container from now on, and swaps the older containers for the newer ones.
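The failure resistance and scaling described above both come down to one idea: a control loop that compares the number of running containers with a desired count and corrects any difference. Here is a toy model of that loop; the class and its fields are invented for illustration and bear no relation to any real container manager’s API.

```python
class Container:
    """A stand-in for a running container instance."""
    def __init__(self):
        self.running = True

    def crash(self):
        self.running = False

def reconcile(containers, desired):
    """One pass of the control loop: replace crashed containers, then scale to `desired`."""
    alive = [c for c in containers if c.running]
    while len(alive) < desired:
        alive.append(Container())   # restart/replace until the desired count is met
    return alive[:desired]          # scale down if there are too many
```

Scaling from three containers to a hundred, or back down to two, is literally just calling the same loop with a different number, which is why changing “one number in the configuration” is enough.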

Let’s go back to our “pipeline”. During this process, once the changes made by the hero of our story have been accepted, a lot of testing and containerization takes place. If everything goes according to plan, the new version of the backend becomes available to customers as soon as the process completes. It is up to the creators of such automation whether the new version reaches customers immediately or only after someone has explicitly approved the release.

Two things are worth noting here. First, the development team can focus almost entirely on developing code. Programmers’ salaries are among the highest around, so a great way to maximize the use of specialists’ time is simply to take the problems of service delivery off their hands. Programmers only deliver code, and each individual task of theirs ends when the tests performed by the pipeline pass. Since they don’t have to test manually, they can start another task while the automated testing happens somewhere on a server far away. The second thing worth mentioning is that all this automation takes over activities that are inherently boring. I suspect any employee, especially the more ambitious ones, who had to copy code several times a day, compile it, run a dozen tests manually, containerize it and so on, would say goodbye to such a job pretty quickly. Automation is supposed to take the boring tasks away from people so that they can take care of the more creative activities.

Nowadays, when developing any code, we aim to fully automate virtually all processes. This is guided by a simple principle: “if you’ve had to do something more than once, it’s worth considering automating that process.” The cloud and various platforms have made it possible to automate a great many processes, from tasks as simple as failure notifications to setting up and modifying huge infrastructures in the cloud. Many companies have seen a significant acceleration in the development of their applications thanks to automation, and this trend will no doubt continue. The question is what awaits us in the near future, because technologies are already developing so quickly that it is difficult to predict even the next few years.


What exactly is the cloud?


‘Cloud’ is a very broad term, repeated quite often in the context of IT solutions. We see the word ‘cloud’ in application names so often that we could get the impression it is an advertising slogan meant to lure customers with modernity: if something has “cloud” in its name, it is presumably more reliable and accessible from every corner of the world. The truth is that applications running in other environments, outside the cloud, can run just as stably and be just as accessible. As users, we basically shouldn’t care at all whether an application runs in the cloud or on some other infrastructure. We should only care whether it is available and meets our needs. Whether it works in the cloud should be irrelevant to us; that is rather a matter for the engineers of IT companies, because they are responsible for keeping services running. So why is there so much talk about the cloud? Because the power of the cloud lies in its countless capabilities, which, properly used, can genuinely improve the stability and many other aspects of an application. On top of that, these capabilities are available within a few clicks, which is very convenient. In addition, the cloud provides great opportunities for cost optimization, which is especially attractive to companies that pay very large amounts of money to maintain their server infrastructure. So it is companies that should be concerned with what the cloud is and how to use it to maximize their own benefits; users should just use their products. Let’s explain what the cloud really is and what it can give us.

To understand what the cloud is, let’s go back in time about 20 years and play the role of an entrepreneur developing some software. Let’s assume this software works in the standard client–server model, i.e. there must be some server running the application, available on the Internet. The server is accessed remotely by a client, i.e. a program operated by a user. The user does not care where the server is physically located; he only knows that he must have Internet access to use the application, and he uses the client installed on his computer to do so. In these circumstances, our entrepreneur knows that he has to buy several professional machines, that is, servers, mount them somewhere prepared for the purpose, and install on them an operating system, his server application and some additional software to monitor it all. Most of these activities he can do himself or with the help of his employees. However, there are some problems. Servers by nature heat up quickly, need a lot of power and are usually very noisy. Because the office sometimes experiences power cuts during the night and the servers have to work practically non-stop, our hero rightly concludes that the office is not the best place to keep these business-critical machines. So he decides to colocate his servers in a nearby data center. In essence, this means that the data center provides space, power, Internet access and cooling within its building; the building is also adequately protected and monitored by security. At the end of each month, the data center owner sends the entrepreneur a bill. Once our entrepreneur has installed the servers in the data center, he manages them remotely and comes on site only if necessary. His server applications thus run on the servers located in the data center.

The described way of maintaining infrastructure is still used today. Anyone can buy a server, place it in the data center of their choice and then use it as they need. The biggest advantage of this approach is full control over your own resources. We can buy almost any server, install on it literally whatever we want, and if there are several servers we can connect them with practically any network, although for that we will need some arrangements with the technical staff of the data center. In other words, we have full freedom, excluding some details covered by data center regulations or technical limitations. This type of infrastructure, managed practically entirely by us, is called ‘on-site’ or ‘on-premises’.

Full control over ‘on-site’ infrastructure is an unquestionable advantage, but let’s pay attention to the disadvantages of such a solution. First of all, provisioning a server may take days or even weeks, depending on the size of the order and the number of available specialists. This type of infrastructure demands a lot of time and sometimes an actual visit on site. We also have to configure such machines ourselves from start to finish: from mounting them in place, through installing the operating system and hardening its security, to installing and managing the applications. Generally speaking, the more options we have, the more time we spend on getting the settings right.

The configuration possibilities are huge, but nobody really uses all of them. There are simply too many of them. So it should come as no surprise that many businesses would easily be willing to give up some of those huge configuration possibilities, which don’t always provide tangible benefits anyway, in exchange for the speed of infrastructure development for their needs. After all, it’s all about meeting the needs of their own customers quickly. Optimization that speeds up the work of servers by a few percent, but requires long hours of work, can be given up.

In July 2002, Amazon created what today we would call a cloud in the IaaS model, i.e. ‘Infrastructure as a Service’. This is a type of cloud that allows you to create a virtual infrastructure in just a few clicks. A virtual database, file server, VPN, application server, authentication, a network connecting all these services, monitoring, whatever we want, we can create in minutes and connect together. All of it is created in virtual form, and the physical infrastructure is taken care of by the cloud provider. The configuration options for all these services are indeed smaller than if we managed the machines ourselves, but they are enough for most users. And most importantly, the speed of infrastructure development is incomparably greater. But that is not all. If we use the resources and potential of the cloud optimally, we can significantly reduce costs. Let’s assume that today I need three servers to make my company work properly, and my employees use about 80% of their capacity. I know that in a week a few more people will be hired on three-month contracts. In an on-site environment I would have to buy additional servers that would work for only a short time, and after the contracts expired I would have to sell them. The cloud gives me the option that if I need additional resources for a while, I simply buy them in seconds, and when I no longer need them, I give them back and stop paying for them. I pay only for the hours I used them; pricing is usually by the hour. But that is still not all. Let’s say my company has grown and I decide to open an office in another country. If I manage my cloud resources properly, I am able to create a copy of my infrastructure overseas and hand it to the employees in the other country. These are just a few of the dozens of possible scenarios for putting IaaS clouds into practice.
The power of the cloud is automation, virtualization, the wealth of available services, and the ability to connect them together. The most popular Infrastructure as a Service clouds are Amazon Web Services, Microsoft Azure and Google Cloud Platform. These are so-called public clouds, meaning they are available to anyone. But can you have your own cloud? Of course you can. The OpenStack project, developed for over 10 years, focuses on creating software that allows you to run your own cloud in your own data center. OpenStack is also used commercially, for example by OVH.
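The three-month-contract scenario above can be put into rough numbers. This is only a back-of-the-envelope sketch: the hourly rate, purchase price and resale price are invented for illustration, and real cloud pricing varies enormously by provider and instance type.

```python
HOURS_PER_MONTH = 730  # average hours in a month

def cloud_cost(servers, months, price_per_hour):
    """Pay-per-hour model: you pay only while the servers exist."""
    return servers * months * HOURS_PER_MONTH * price_per_hour

def on_site_cost(servers, purchase_price, resale_price):
    """Buy the machines up front, then sell them when the contract ends."""
    return servers * (purchase_price - resale_price)

# Three extra servers for three months at a hypothetical $0.10/hour,
# versus buying $2000 machines and reselling them for $1400 each:
rental = cloud_cost(3, 3, 0.10)         # 657.0
purchase = on_site_cost(3, 2000, 1400)  # 1800
```

Under these made-up numbers renting wins comfortably, and the arithmetic flips the longer the servers are needed, which is exactly why the cloud pays off for temporary demand in particular.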

Well, okay: with ‘Infrastructure as a Service’ we can automate the process of creating, modifying and even deleting virtual infrastructures. But since we have already sacrificed some configuration capability to gain speed in developing our business, can we go a step further? Can we fit the application our company develops into some framework and simply upload it to a platform, without worrying even about virtual servers? We can. PaaS, or ‘Platform as a Service’, is a type of cloud computing that hands over to the cloud provider’s technical staff not only the management of the physical machines but also the management of the virtual resources. Let’s assume that I am developing some web application. If I decide to use a PaaS model, my developers only have to prepare the application code to fit the framework imposed by the platform. Deploying the application on the platform, and eventually migrating it between similar platforms, becomes trivial. Moreover, if I prepare my application properly, the platform itself will manage it quite intelligently. For example, if the application crashes, the platform itself will take remedial action, such as restarting it. If the platform notices increased traffic, for example due to a sudden surge of interest in the app, more resources will automatically be deployed within a short time to keep things smooth for my users. Such functionality is especially important on days like Black Friday, when the number of users of an online store can increase severalfold within hours. PaaS is able to automatically adjust resources to the required demand, so that no one feels the website running slowly during the hottest periods, and resources are automatically released when they are no longer needed. PaaS, however, can do much more.
In addition to automatically scaling resources, it also makes it much easier to deploy new changes as soon as developers make them. In an on-site environment, changing software versions can be quite cumbersome. Under PaaS, the process can be automated completely. In other words, customers may not even notice a service interruption when the software version is changed on the fly; users get new functionality without being cut off from the content of the application they are using. The best-known platform of this type is Kubernetes, a technology making a huge career today.
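The automatic scaling described above boils down to a simple policy: pick a replica count so that each copy of the application handles at most a fixed amount of traffic, within some lower and upper bound. The capacity threshold and limits below are invented for illustration; real platforms use richer metrics (CPU, latency, queue depth), but the shape of the calculation is the same.

```python
import math

def desired_replicas(requests_per_second, capacity_per_replica=100,
                     min_replicas=1, max_replicas=50):
    """Scale up under load (e.g. Black Friday) and back down when traffic drops."""
    needed = math.ceil(requests_per_second / capacity_per_replica)
    # Never go below the minimum or above the budgeted maximum.
    return max(min_replicas, min(max_replicas, needed))
```

At 50 requests per second one replica suffices; a Black Friday spike to 1000 requests per second scales the app out to ten copies, and when traffic falls away the same rule releases the extra resources.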

The last model of cloud computing is SaaS, or ‘Software as a Service’. This is the model all of us use most often. The previous models are of interest to programmers or other IT engineers, but this is the one most people associate with the cloud. ‘Software as a Service’ is a model in which the provider of a web application simply makes the functionality of its product available to users. The physical infrastructure, networks, servers, operating systems, virtual machines, resource management and application logic all remain someone else’s responsibility. If you are developing a SaaS product, you may need to know all those aspects, or at least some of them; but if you are a user of an application in the Software as a Service model, you are only interested in using it. So the next time someone sees an application with the word “cloud” in its name and, after buying access, gets immediate use of its functionality without thinking about where it actually runs, it is probably a SaaS application. When we log in to such an application, remote infrastructure running on physical machines hundreds of kilometers away automatically adjusts to our needs. Maybe the application will request new resources to keep us running. Maybe exactly during our work a new version of the application will come out and be installed on those remote resources in such a way that we won’t even notice it. Maybe during our work some of the resources will fail, and we still won’t feel it, because the tasks our resources were handling are quickly taken over by other machines. All of this is imperceptible to us, because that is how the cloud should work: it pushes these problems away from us. Can I create a SaaS application myself and make it available to my users so that their experience is always smooth and fault-tolerant? I can, and for this purpose I should use resources in the cloud, because it is the cloud that gives me such a wide range of possibilities.

As you can see, saying that something is in the cloud can mean different things. What is certain, however, is that somewhere beyond our sight automated processes are taking place to keep things running smoothly and reliably for users. These processes may involve modifying the IT infrastructure on which the application runs. What we see in a web browser while using an application is sometimes just the tip of the iceberg; there is much more technology under the surface than we realize. In the past, infrastructures were much more static. Today they change, adapting to customer needs and optimizing costs; they have become almost as dynamic as software. That is why so many companies are migrating their services to the cloud, as companies are the ones best placed to see its appeal. And while the cloud may not always be the ideal solution for everyone, my experience tells me that the vast majority of functionality can be migrated, especially if we don’t need very specific devices. We don’t even need to develop our own software within the business; it is enough that our company performs some activities that can be automated. Then, too, the cloud can help a lot, and you don’t have to be an IT company either. If some action we perform repetitively can be programmed, it is worth handing such a task to the cloud and taking care of something more creative ourselves. The possibilities are on the table and there are many of them; you just need to review them and choose the ones that suit you best. Human creativity knows no bounds, so I encourage you to explore these almost limitless possibilities and use them to solve your problems. Who knows, maybe one of you will create your own unique and innovative cloud solution that we will all want to use one day?


10 simple ways to improve your safety


Security in IT is a very broad field. Applications and infrastructures are built in such a way that this aspect remains inseparable from every stage of the process. Sometimes improving security comes down to the right choice of parameters in the code, and sometimes it takes thorough analysis and long preparation. Let’s leave it to specialists how engineers from different IT disciplines design their solutions, but remember that even the best specialists cannot protect us from our own mistakes. We should all take our own security seriously, because it also translates into the security of entire systems and thus of other users. As users we have little influence over how systems are built, but the little that does depend on us is worth doing well. There are a few simple practices we can adopt right now to improve our security. In today’s episode we will list 10 of them.

1. Update your applications and systems frequently

The first way to improve your security is simply to update your applications and systems frequently. This applies to laptops, smartphones and basically any device that offers such a possibility. With almost every update we receive a security patch. Sometimes even a small package of patches can make the difference between someone breaking into our device or not. We can say that software development is in a way a race. The race between the providers of a given service, who constantly look for weaknesses in their code and try to fix them, and the hackers who constantly look for the same weaknesses and try to exploit them. By ignoring updates, we make the hackers’ job easier because we give them more time to find vulnerabilities and find our device with that particular old version of software. This is definitely not the side of the race we want to be on. So I encourage you to update your computers, smartphones, IoT devices frequently, but also to update your routers and basically anything that gives you the ability to do so.

2. Make backups

The second way is not usually associated directly with security, but I assure you it has a lot to do with it: frequent backups of your data. Administrators say that people fall into two groups, those who make backups and those who will make backups. Sarcasm aside, you have to give the saying credit: most people don’t make backups until they learn painfully how important they are. Our laptop may be stolen, our hard drive may get corrupted, or we may simply delete our important data by mistake. In such scenarios, a backup saves us a lot of work. Moreover, a backup is extremely helpful if we fall victim to ransomware, a kind of malware that encrypts our data with a key unknown to us and demands a ransom to decrypt it. If we have a backup of our data, we can simply format our hard drive completely in such a case, getting rid of the virus for free, and after reinstalling the system we recover the data from the backup. Backing up eliminates a great many problems before they even arise.
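A backup doesn’t have to be complicated to be useful. Here is a minimal sketch of an automated backup in Python: pack a directory into a timestamped archive. The paths are hypothetical; a real setup would also copy the archive somewhere off-site and run on a schedule.

```python
import shutil
import time
from pathlib import Path

def back_up(source_dir, backup_dir):
    """Create backup_dir/<source-name>-<timestamp>.zip and return the archive path."""
    source = Path(source_dir)
    Path(backup_dir).mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = Path(backup_dir) / f"{source.name}-{stamp}"
    # shutil.make_archive appends the ".zip" extension itself.
    return shutil.make_archive(str(archive), "zip", root_dir=source)
```

Because each run produces a fresh, dated archive, even a ransomware infection or an accidental deletion only costs you the changes made since the last run.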

3. Use a password manager

The third way is to use a password manager. There are probably many bad password habits to list, but my 3 favorites are: making up short and simple passwords, using the same passwords everywhere, and writing passwords on colored sticky notes taped to the monitor. A properly used password manager solves all these problems, although I admit the last example is rather extreme. As an open source advocate, I use the Bitwarden application. It is a manager available for macOS, Windows and Linux, but also for mobile systems in the Apple and Android ecosystems. With such a password manager you only need to remember one password, the master password that opens the manager. The master password is also the encryption key for the stored passwords, so it must be strong enough.

If this is going to be the only, or one of very few passwords that we have to remember, I recommend making up even a paranoidly long password, say 48 characters. Once we have it, we can use the built-in password generator and use such generated passwords everywhere. We do not have to remember them. Just go to a page, start the registration and Bitwarden will offer to generate and remember a password, whose length and difficulty level we can even specify. Once you agree, it will begin to suggest a user and password for the site. What is important, if we accidentally land on a crafted page which looks identical to the one we registered on, Bitwarden will not suggest the password. Of course we will be able to pull it out manually, but it will be a very clear sign that something is probably wrong and we should think about whether we are being tricked. So I recommend the password manager right away because it will solve many problems, including some not so obvious ones.
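To show what a manager’s built-in generator is doing under the hood, here is a sketch using Python’s standard `secrets` module, which is designed for cryptographic randomness (unlike the ordinary `random` module, which must never be used for passwords).

```python
import secrets
import string

def generate_password(length=48):
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A 48-character password from this alphabet is far beyond any realistic brute-force attack, and since the manager remembers it for you, its length costs you nothing.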


4. Use two-factor verification

The fourth way to improve security is to use two-factor verification. Every time we log in to some service, we should give a password, and if the password is correct, we should be asked for an additional one-time password. We can receive such a password by SMS or read it from an application on a smartphone. However, I do not recommend SMS, because this type of communication is not encrypted: SMS messages are sent and received as plain text. One-time passwords are usually valid for only 30 seconds, but security is not about making hackers’ work easier. One-time passwords can be generated by various mobile applications; the choice is really large, so I encourage you to browse the app store yourself and pick something that suits you. Just make sure the application can back up your codes: if our mobile device is lost and we have no copy, we may have a serious problem logging in to our services. Used properly, one-time passwords give a very high level of security. A hacker not only has to know our long and complicated password, which is already very difficult, but also has to have physical access to our mobile device. So I encourage you to review all the services you use and enable such verification wherever it is available. Usually the activation process is simple and we are led by the hand.
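Those 30-second codes are usually TOTP codes, standardized in RFC 6238: the authenticator app and the service share a secret, and both derive a short code from that secret and the current time, so the code changes every 30 seconds without any network traffic. A minimal sketch using only Python’s standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """Derive the RFC 6238 one-time code for the given Unix time."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", int(for_time) // step)   # 30-second time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Run against the published RFC 6238 test secret `12345678901234567890`, the code for time 59 is 287082, which is exactly why any authenticator app and any server agree on the same six digits at the same moment.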

5. Change your DNS

The fifth way, which is still not very popular nowadays, is to change your DNS. If we use some popular DNS server, or the one our ISP suggested, we are very likely exposed to all the flaws in that system. DNS is basically the phone book of the Internet; one of its tasks is to translate domain names into IP numbers. So if I need to connect to some website, my computer will first ask a known DNS server what IP address is assigned to that domain, and the DNS server will answer with the IP number. You will find more about DNS in the second episode of “IT in simple words”. Since devices on the Internet ask DNS servers for IP addresses all the time, because they do not know them themselves, why not block some addresses already at this stage? Let’s say some hacker created a page that looks identical to a page of some service we use, built only so that someone enters it fully convinced it is the real page and types in his login and password. The hacker receives these credentials and starts using them on the real website to impersonate his victim. This method of luring out credentials is called phishing. In 2021 alone, about 2.5 million phishing sites were detected. Such sites are quite effectively blocked at the DNS level. That is, if for some reason my computer starts trying to connect to a phishing site, it will ask DNS for the site’s IP. If the DNS server knows the website is phishing, instead of answering with the server’s IP number it will answer “there is no such website”.

Many popular DNS servers simply respond to virtually any DNS query, even those that will lead you to spoofed sites. If you want to increase your security, it is worth changing your DNS to one with a filter for dangerous sites. Of the free solutions, Cloudflare’s DNS works well, but I personally prefer the paid, albeit very cheap, NextDNS service, because it also cuts off advertising networks. That is, if for any reason my computer wants to do something with some ad network, such as download an ad, or I accidentally access a phishing site, NextDNS will treat them as the same evil and respond that such a site does not exist. There are never any ads in my web browser precisely because of NextDNS. So I encourage you to change your DNS.
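The filtering behavior described above can be modeled in a few lines: a resolver answers normal queries from its records, but for names on a blocklist it behaves as if the domain did not exist. The record table and blocklist entries are made up for illustration; real filtering resolvers work from constantly updated threat feeds.

```python
# A toy model of a filtering DNS resolver (hypothetical records and blocklist).
RECORDS = {"example.com": "93.184.216.34", "shop.example": "203.0.113.7"}
BLOCKLIST = {"phishing-login.example", "ads.tracker.example"}

def resolve(domain):
    """Return the IP for a domain, or None as if the domain did not exist."""
    if domain in BLOCKLIST:
        return None  # filtered: the client never learns the real address
    return RECORDS.get(domain)
```

Because the block happens before any connection is made, the browser never even reaches the phishing server or the ad network; from its point of view, the site simply isn’t there.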

6. Encrypt your disk

Another method is becoming more and more popular: disk encryption. If our operating system supports some native encryption technology, it is worth using it. At worst it will force us to enter one additional password. However, if this password is strong enough and someone simply steals our drive, the effort to decrypt it will be so high that in practice it will be completely uneconomical. Breaking passwords is always a matter of time: when someone sets out to break a password, you never know in advance whether it will take 5 minutes or a quadrillion years. By coming up with a strong password, we make the process take so long that a hacker would only waste time and money on electricity bills, because the process is quite energy-intensive. macOS users can look at the native FileVault technology, while Windows users can look at BitLocker to encrypt their drive. If for some reason the technology isn’t available or you just don’t want to use it, you might look at VeraCrypt, which works on every platform and is available for free. Note that if logging in to our user account on our laptop requires a password, this does not by itself mean that the drive is encrypted. However, if the login password is also the key that decrypts our drive, then we can consider our data much more secure.
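The “5 minutes or a quadrillion years” contrast comes straight from the arithmetic of brute force: the search space grows exponentially with password length. A rough estimate, with an arbitrarily assumed guess rate of 10¹² attempts per second (a generous figure for an attacker with serious hardware):

```python
def years_to_brute_force(length, alphabet_size=70, guesses_per_second=1e12):
    """Average time to search half the keyspace, in years (a rough model)."""
    keyspace = alphabet_size ** length
    seconds = keyspace / 2 / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)
```

Under these assumptions an 8-character password falls in minutes, while a 16-character one already takes billions of years; a 48-character master password is astronomically beyond that. This is why the password itself, not the encryption algorithm, is usually the weakest link.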

7. Use any technology that provides end-to-end encryption

The seventh method, which unfortunately may involve the difficulty of convincing your friends to use it, is to use technologies that provide end-to-end encryption. As we remember from the previous post on the blog, end-to-end encryption is encryption in which only the sender and recipient of a message hold the encryption keys, not the service provider. Most of us use many Internet services every day, and our data, even the private data, is scattered all over the world; it is impossible to keep track of it all. It may physically reside on dozens of servers around the world. How do we know how security-conscious the administrators of those servers are? How do we know whether one of them has been hacked? Would we even be informed, and even if we were, what then, once the data is already out of our control? We do not know the answers to many such questions. For this reason, if you need to send a file, consider using a secure instant messenger such as Signal or Session. Emails can be encrypted with S/MIME or OpenPGP. If someone on the other side of the world breaks into a server that happens to store our data, they will see only a string of meaningless characters and some metadata, and it is worth making sure that this is all they can read. There are really many such technologies, covering far more than the cases I mentioned above. I encourage you to browse catalogs of privacy-friendly tools yourself; many of the technologies listed there implement end-to-end encryption as standard, and many are free.

8. Delete apps you don't use anymore

The eighth method is about keeping your applications tidy. When we buy a new laptop or smartphone with the operating system preinstalled, we will find several applications already on it. As we use the device, we install various additional apps, which we use for a while and then abandon, or open once every six months. It is good practice to routinely review your apps and delete any you no longer use. An application can contain millions of lines of code that we have no insight into, and even if we did, there would never be enough time to study it all. At some point we simply have to trust that the developers’ product was at least satisfactorily tested for security, but we can never be sure. Malware can exploit vulnerabilities in the applications we have installed. For this reason, if we do not use an application, it is advisable to remove it and so minimize the threat.


9. Don't let applications access the camera or microphone if you don't see a reason for it

The penultimate, ninth, method also involves regularly auditing your applications. Once we have removed the unused ones, we should review the permissions of those that remain. This mainly applies to smartphones. If a photo-management app wants access to the Internet, the photo directory and the camera, it is safe to say it should have such permissions. But if the same application asks for access to the microphone while having no video-recording functionality, we should seriously consider revoking that access. If an application has access to resources it does not obviously need, or whose purpose we do not understand, it is a good idea to cut that access off right away. If it is really needed, it can always be restored later; but until we understand why, for example, an e-mail application needs location services, I suggest blocking it. This rule follows from the fact that as users we quite often accept an application’s terms without reading them. If we did read them, we would probably learn that the application tracks us in some way, or simply adds the relevant metadata to our files. Since we almost never have time to thoroughly analyze how an application works, the safest approach is simply to limit our trust. Remember that the metadata a service provider collects about us for tracking and profiling can also be stolen at some point, even after years of storage on the provider’s servers. Do all providers encrypt the data they store? The question is, unfortunately, rhetorical.

10. Keep calm

The last method is not related to any technology; it concerns only our attitude. The rule is: keep calm. Nowadays we are flooded with phishing attempts. We receive an email supposedly from our bank in which an alleged administrator asks for our password, because otherwise our account will be blocked. Or some service asks us to change our password immediately for one reason or another. The only thing that reliably protects us is staying calm. Administrators never ask for passwords: they can reset them at any time and grant themselves any access to our data, so why would they need ours? Phishing attempts almost always involve an element of time pressure. If a message says that unless we do something within 12 hours, something bad will happen, its sender is simply counting on fear. Trusting that clicking the link is harmless, we in fact visit a website that hands our password to the attacker, or we download malware. I therefore urge you to treat every e-mail and SMS you receive with great caution. At such moments it is worth asking yourself: “Isn’t the main purpose of this message to make me drop my guard?” Modern phishing relies more on social psychology than on technology, and no technical protection will fully help; all that remains is our own caution and vigilance.

Is your data secure?

The development of technology has meant that sending some file or message is now possible literally in two clicks or two touches of a touch screen. We are able to send a piece of our virtual lives to the other side of the world within seconds. It’s so simple and commonplace that we don’t even think about what happens next with that sent data. Let’s stop for a moment and before we press the “send” button, let’s think if we and the recipients are the only ones who can see the data we sent.

Let’s assume I want to send an e-mail to a friend: a simple action each of us performs many times a day. I write a few sentences, enter the address, title the message and click “send”. Immediately after I click, the message is split into packets and sent over an encrypted link to the mail server hosting my mailbox. From there it is forwarded, again over an encrypted link, to the recipient’s server, where it can be collected by the recipient’s mail client. Note that this whole process involves several encrypted connections, so the message can be safely copied between devices: whenever it is in transit, from server to server or from a server to our device, the packets crossing the Internet are secure. But what happens to the message stored on the server? After all, it is kept there in case we want to download it to another device, and in fact it sits on two servers, the one with the sender’s mailbox and the one with the addressee’s. Is the message encrypted at rest? Well, unless we took care of it ourselves, the message is stored on the server in plaintext. That is, as soon as it traverses a stretch of the Internet and reaches an intermediate point, it is decrypted and stored in that form. The same happens when a message goes to multiple recipients: it is saved in plaintext on each recipient’s server. A very similar situation arises when we write to someone via an instant messenger; many popular messengers encrypt only during transmission, not during storage.

Let’s consider another example. Suppose I keep some files in the cloud. Whether I share them with a friend or keep them to myself, quite often those files, too, are stored unencrypted. To be fair, popular cloud-storage providers often assure us that files are stored with them in encrypted form; just as often, they forget to add that the key that decrypts those files belongs to the provider. So perhaps if data is stolen from their servers no hacker will read our files, but the service’s administrators can do so whenever they want. This approach creates enormous opportunities for profiling and tracking users. Messages and files can be searched for phrases that suitable algorithms process; our photos can be analyzed with artificial intelligence and labeled accordingly. A provider can use the collected data to serve us targeted advertising, or simply sell it to other companies. Even if no human ever looks at our data, the company analyzing it knows quite a bit about us.

As you can see, simply giving users an encrypted connection solves only part of the problem. It protects us from prying observers trying to watch our activity on the Internet, but it does not solve the problem of storage security, because in the scenarios above all the encryption keys are held by the service provider. The way to achieve a much higher level of security is simply to reverse the situation: if we own our encryption keys and encrypt our data before sending it to the cloud, the provider cannot open our files. This way of securing data is called end-to-end encryption. It is more a method than a single technology: the owner of the file or message holds the encryption keys, not the service provider.

Keeping the keys on one’s own is obviously connected with the necessity of taking proper care of the security of such keys or at least remembering a strong enough password. Additionally, the person with whom we will be corresponding must know how to use such technology. I assure you that it is not difficult. We do not have to know about cryptography to secure our data and messages. This is quite a simple activity, which takes seconds, but dramatically increases the level of our security.

What can we do to make our encryption really effective? Let’s touch on two equally important aspects, which do not always have to occur together. First, we need strong passwords. Modern computers have so much computing power that under favorable conditions they can make trillions of password-cracking attempts per second. A cracking algorithm simply tries every possible combination of characters as the password until one of them decrypts the data, and that one valid combination is nothing other than our password. In practice, an 8-character password can fall in less than a second. There is no unbreakable password, but there are passwords that would take billions of years or more to “guess” this way. The longer the password, the better. A random string of characters is hard to remember, so it is best to start thinking of passwords as whole sentences. For example, the password “e=mc^2toMyFavoriteEinsteinRules” is over 30 characters long, definitely easy to remember, yet practically impossible to break today. I personally recommend passwords of at least 24 characters, containing lowercase and uppercase letters, numbers and special characters. Strong passwords should be used everywhere, without exception, not only in relation to today’s topic. The second aspect of end-to-end encryption is understanding encryption keys. In fact, you only need to know that two keys take part in the process: a public key and a private key. Both are simply files containing very long strings of characters. We keep the private key strictly to ourselves, and the public key we can share with our correspondents, preferably at an actual meeting rather than over the Internet. Our correspondents will give us their public keys in the same way. When a friend wants to send us a message, they encrypt it using the public key we shared with them earlier; we receive the message and decrypt it with our private key. When we reply, we encrypt the reply with their public key. This is essentially all the theory we need before we start encrypting data.
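OpenPGP and S/MIME have their own key formats, but the underlying idea of combining your private key with someone else’s public value can be illustrated with classic Diffie-Hellman key agreement, a building block of many end-to-end encrypted systems. This is a minimal sketch with toy parameters: the prime below is far too small for real-world security.

```python
# Minimal Diffie-Hellman sketch: two people agree on a shared secret
# while exchanging only public values. Illustration only -- real systems
# use much larger, carefully chosen groups.
import secrets

P = 2**127 - 1  # a Mersenne prime; far too small for real use
G = 3           # public generator, known to everyone

alice_private = secrets.randbelow(P - 2) + 2  # Alice keeps this secret
bob_private = secrets.randbelow(P - 2) + 2    # Bob keeps this secret

alice_public = pow(G, alice_private, P)  # safe to share openly
bob_public = pow(G, bob_private, P)      # safe to share openly

# Each side combines its own private key with the other's public key...
alice_secret = pow(bob_public, alice_private, P)
bob_secret = pow(alice_public, bob_private, P)

# ...and both arrive at the same shared secret, which was never sent.
assert alice_secret == bob_secret
```

An eavesdropper who sees only `alice_public` and `bob_public` cannot feasibly compute the shared secret; that is exactly the property that lets strangers establish an encrypted channel over a public network.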

There are already many technologies for sending files, messages and emails with end-to-end encryption. To encrypt e-mail we can use, for example, OpenPGP. The technology is very popular and available for free: you just generate a key pair with a free tool, share the public key with your friends, and configure your email client to encrypt your messages. However, if you decide to use this standard, you must remember to transmit your public key in a secure way. If you send it over the Internet and it is intercepted and swapped along the way, you may not only lose your security but also gain a false sense of it, which is even worse. So remember to hand over keys generated with OpenPGP securely. The second quite popular method of e-mail encryption, though no longer free, is S/MIME.

In this case we no longer have to deliver the keys ourselves. We pay an external certificate authority to confirm the authenticity of our key to the people we e-mail, and this happens automatically. If one of our friends uses an S/MIME certificate and sends us an email, we will see a padlock in our email client, similar to the one in a browser’s URL bar for an HTTPS connection. It is a sign that the message was signed with an S/MIME certificate and that, along with it, we have received the sender’s public key. An S/MIME certificate can be purchased for less than 50 PLN per year. If we don’t feel like generating our own keys at all, we can use an email provider that implements OpenPGP as standard. ProtonMail is one such service: this Swiss provider applies end-to-end encryption between mailboxes on its own servers using that technology. So if one of our contacts uses ProtonMail and writes to our mailbox at the same provider, the message will be encrypted end to end. But if we want to exchange encrypted mail outside ProtonMail’s servers, we have to share our public key just as we would with plain OpenPGP. As you can see, each method has its strengths and weaknesses, so the choice should depend on many factors, including what your friends use.

When choosing a technology, another very important factor is whether it is available under an open-source license. If the application code is available for everyone to see, then of course it is also available to hackers, and weaknesses can indeed be identified this way. But it is not only people with bad intentions who read the code: the more eyes that can verify it, the higher the chance of finding bugs before they are exploited. For this reason I very often use various open-source technologies and recommend them to my clients, also when it comes to instant messaging. For written communication, as well as audio or video calls with end-to-end encryption as standard, you can use the Signal application. It requires a phone number for activation but is very user-friendly, and its functionality is practically the same as that of the most popular messengers. You should just remember to verify your contacts by scanning the QR codes on your friends’ phones; only then can we be sure the keys have not been intercepted. Signal is mainly available for smartphones, but there are also versions for Windows and macOS. To increase your security further and add very strong anonymity, you can use a messenger based on onion routing. Session is one such application. It sends text messages over the Lokinet network, similar to the Tor network described in the previous episode. Harnessing a tool as powerful as onion routing for communication makes it extremely difficult both to eavesdrop on messages and simply to identify who is talking to whom. As of today, it is hard to find a more anonymous and secure way to exchange text messages. Session has some drawbacks, however, which are typical of any application using onion routing: data transfer is very slow and, partly for this reason, the application does not currently allow voice calls.

If anyone of you stores files in the cloud, I also encourage you to check out the Cryptomator application. With it you can quite easily encrypt your files in the cloud. If we already have some disk resources purchased from a provider, but we do not want the provider to have access to our files, we can store files encrypted with our key. All we need to do is to indicate to the application which directory on our computer is synchronized with the cloud, where to create an additional virtual disk, and come up with a strong password. From this point on we can use the additional virtual disk as any other, remembering that any file we put there will be encrypted by Cryptomator and in this form sent to the cloud.

The technologies listed in today’s episode by no means exhaust the topic of end-to-end encryption. There are many more options, and the choice should depend on your needs. But now let’s ask ourselves: “If I encrypt everything I can, will I become completely anonymous?” The answer is unfortunately not that simple. Using the technologies above, and many others, we will certainly dramatically increase our security. Reading our correspondence will become so difficult that in practice it will be impossible, and similarly for stealing our files: if we encrypted them with a strong password or key, we can safely assume that no one will read them in our lifetime, in the lifetime of our great-grandchildren, or for many generations to come. However, even if we encrypt everything, some part of the communication must remain readable so that the message can be transported at all. Even an encrypted message has to be addressed to someone, and the address must be readable, because otherwise it would be impossible to know where to deliver it. When we encrypt an email, with whatever technology, the subject, the addressee and the sender remain visible to the service administrators. When we write on a messenger that does not use onion routing, some addressee identifier must likewise stay visible to route the message. If we call the content of our message “data”, then the addressee, the sender and the sending time can be called “metadata”. Metadata is, in essence, data that describes other data. It is usually much smaller than the data itself, but it carries critical information needed to complete a service such as email delivery. Encryption has become so widespread and so strong that trying to break it has become unprofitable.
Of course such attempts are still made, but gathering a large enough base of metadata tells as much about us as the content of our messages would. How often we talk to someone, for how long, how many times a day and at what times of day says a lot about the type of relationship. The places we visit, depending on the day and time, form a unique identifier of our person. As former NSA General Counsel Stewart Baker once put it, metadata says absolutely everything about us: if we have enough metadata about someone, we don’t really need the data, in this case the content of the messages. Onion routing, thanks to its decentralized architecture, allows data to be sent with a minimal amount of metadata, which is additionally scattered and difficult to collect and correlate. But email, text messages, phone calls and more remain visible to administrators and government agencies. End-to-end encryption technologies are undoubtedly a great step in the right direction, but they do not by themselves ensure complete anonymity, because hiding metadata in many areas is still a thing of the future. Until that changes, let’s remember how much our metadata says about us. Gen. Michael Hayden, former director of the NSA and CIA, said in 2014: “we kill people based on metadata”. As you can see, metadata says so much that some government organizations will not hesitate to make a final decision about someone’s life based on it alone.

How to stay anonymous on the internet?

When you take an onion in your hand and remove one layer from it, there will be another layer underneath, and another layer underneath. Granted, each successive layer of a real onion gets smaller and smaller, but the onion we’ll talk about in this episode is different. Each successive layer looks identical and is the same size, and in the middle, under all those layers that only a special key will remove, is a message. How many layers do I have to take off to read the message? Only the author of the message knows.

Onion routing was first implemented in the 1990s at the United States Naval Research Laboratory. Its authors, Paul Syverson, Michael G. Reed and David Goldschlag, aimed to develop a network protocol that would strongly protect U.S. intelligence communications. The project was further developed by the Defense Advanced Research Projects Agency (DARPA), and in 1998 the protocol was patented by the US Navy. In 2002, computer scientists Roger Dingledine and Nick Mathewson joined Paul Syverson and built on the existing technology to create the best-known implementation of onion routing, first called the Onion Routing project and later Tor. After some time the Naval Research Laboratory released the Tor source code under a free license, and in 2006 Dingledine and Mathewson, along with five others, founded the non-profit organization The Tor Project. Today the Tor protocol (Tor being short for The Onion Router) is available for free together with its source code. There is also the Tor Browser, which uses the Tor protocol and adds some extra functionality to make tracking a user on the Internet even more difficult.

Tor is not the only onion routing protocol, however, it is the most popular. Talking about onion routing in this episode of “IT In Simple Words” I will limit myself to Tor only. The topic is as interesting for technical reasons as it is controversial for reasons of how Tor can be used. It is a powerful tool, which in the wrong hands can do some damage. I will return to this topic in the second part of this episode.

Let’s assume that I want to connect to google.com. Normally, this communication will pass through various devices, and both sides will learn quite a bit about each other. Google will know who my Internet provider is, and my Internet provider will know that I use Google’s services, though it is unlikely to know exactly how I use them. The whole process of connecting to google.com will be fast, will take the shortest route and will be encrypted thanks to the certificate I receive from google.com. However, if I decide to use the Tor protocol for this, things change. My ISP will not know that I am using google.com, and Google will not know who I am or who my ISP is, as long as I don’t explicitly sign in to their service. Moreover, even if somewhere along the path between my laptop and google.com there is a nosy administrator or a hacker, all they will learn is that someone on that particular link is using Tor. Sounds unbelievable? Let’s explain the mechanism behind it.

Tor is not only the protocol itself, but also an extensive community. Anyone can become part of it and share some of their bandwidth, and this is exactly what happens: users all over the world set aside a piece of their resources to expand the Tor network, and their computers become its so-called “nodes”. Nodes scattered around the world are a major part of Tor’s strength, because the communications passing through them can be completely different from one moment to the next, making it very difficult to track the network’s users.

Sometimes, when I try to connect to google.com, the connection will go through Italy, the USA and South Africa before being directed to Google’s server room. But I may decide to create a new connection, and this time it will pass through nodes in, say, Germany, Canada and the USA before finally reaching google.com. Such a connection through additional nodes is called a “circuit” or “chain” in Tor terminology. The nodes in a circuit also don’t work like regular routers that just forward packets to the next address. When I establish a new connection to the Tor network, cryptographic keys are automatically generated for me to encrypt with. By default Tor uses 3 intermediate nodes, so let’s assume that is the case for me too. Since I know that 3 intermediate nodes will take part in the connection, I take three keys and encrypt my message to google.com three times, with each of them in turn. I send the result to the first node. This node knows only one of my keys, so it can decrypt only that much; in jargon we say it peels off the first layer of ciphertext. It looks at the result and sees it can do nothing with it, because it is still encrypted, but it can route it to the next node. The next node performs the same operation: knowing the key to the second layer, it removes it, again sees only ciphertext, and passes the message on. The third node receives the message and decrypts it with the third key. This time the message is clear and reads “connect me to google.com”. The node makes the connection, and when it receives a reply, it encrypts the reply with its key and forwards it back to the Tor node it received the request from. That node adds its own layer of encryption, and the whole process simply happens in reverse. Eventually I receive the reply encrypted with three layers, and I can read it because only I hold all three keys.
Throughout this process, each node knows only part of the circuit. The first node knows who I am and where to forward my message, but has no idea what is in it; even after removing its layer, two layers of ciphertext remain. The last node, the one that actually connects to google.com, knows what page is being visited, but not by whom; it only knows which node handed it the request. Because of this multiple encryption and circuits that sometimes span the whole world, tracking down a particular user on the Internet is an extremely difficult task. One could, of course, try to eavesdrop on the traffic leaving my laptop and on the traffic between the exit node and google.com; it is theoretically possible to correlate the two connections and link them together, but since nodes are always chosen at random, this is a rather breakneck task. Remember too that Tor nodes do not serve only us. There is a great deal of traffic between them, all anyone can see is encrypted messages, and it is hard to tell which packet came from us when it is lost in a crowd of millions of packets per second. In addition, an observer never knows what stage a message is at: it may have only one layer of encryption left, or more than a dozen if the user wishes. In practice, tracking down someone using Tor is possible, but exceedingly difficult. All this anonymity is not entirely free, however: because connections are routed through random Tor nodes, the connection is much slower, and it slows further as the number of nodes in the circuit grows.
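The wrap-three-layers, peel-one-per-node process above can be sketched in code. The XOR “cipher” below, built from a SHA-256 keystream, is only a stand-in for the real ciphers Tor uses; the keys and the message are invented for the illustration.

```python
# Toy illustration of onion layering: the sender wraps the message in
# three layers, and each node peels exactly one. Do not use this XOR
# construction for anything real -- it only shows the layering idea.
import hashlib
from itertools import count

def keystream(key, n):
    """Derive n pseudo-random bytes from `key` (SHA-256 in counter mode)."""
    out = b""
    for i in count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()

def xor_layer(data, key):
    """XOR with a key-derived stream; applying it twice removes the layer."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

message = b"connect me to google.com"
keys = [b"key-node-1", b"key-node-2", b"key-node-3"]

# The sender adds the layer for node 3 first, then node 2, then node 1,
# so node 1's layer ends up on the outside...
packet = message
for key in reversed(keys):
    packet = xor_layer(packet, key)

# ...and each node along the circuit peels exactly one layer.
for key in keys:
    assert packet != message      # still wrapped: nodes see only ciphertext
    packet = xor_layer(packet, key)

assert packet == message  # only after the last peel is the request visible
```

The point the code makes is structural: no single node ever holds more than one key, so no single node can see both who sent the request and what it says.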

However, there is also a dark side to onion routing that is hard to pass over. The Tor protocol provides anonymity not only to ordinary users, but also to servers. Since everyone on the Tor network connects through several intermediate nodes, they remain unknown, and a server can use the same mechanism. Just as I connected to google.com without Google knowing who I am, Tor also lets me connect to a server that neither I nor Google will be able to locate; and even when I connect to it, it will not know who I am. This is what people usually mean by the “Dark Web”, although the proper name is different: a service available only inside the Tor network should simply be called a “hidden service” (or, in newer terminology, an “onion service”). These hidden services are where illegal content is hosted, illegal goods are traded, and so on. Using hidden services is not in itself illegal, however; anyone can decide to host their own HTTP server as one.

The process of connecting to a hidden service is quite complicated and its details are far beyond the scope of this podcast. So let’s limit ourselves to the two most relevant facts about such a connection. The first fact is that in order to use such a hidden service, we need some specific address that we type into the URL bar. In the same way as we always type in addresses with the suffix “com”, “pl” or other, hidden services use addresses with the suffix “onion”. Such addresses are usually different from normal addresses available through DNS, because they are strings of characters, which are derivatives of public keys of hidden services. If you enter an address with the suffix “onion” into the URL bar of a normal browser, you will get an error because the DNS does not know about this top-level domain. Only the Tor Browser will be able to make a valid connection. Such addresses are not published anywhere and are also not available in search engines. The second fact about connecting to a hidden service is that once the circuit is set up, there are at least 6 Tor nodes between us and the server. This is due to the implementation of the protocol. Both the client and the server hide behind at least three nodes, responsible for successive layers of encryption. In the case of a connection to a hidden service, we have the sum of these nodes, so that neither the server nor the client knows anything about the other side.
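The claim that onion addresses are “derivatives of public keys” can be made concrete. The sketch below follows the encoding described in Tor’s v3 rendezvous specification, where the address is the base32 encoding of the service’s 32-byte ed25519 public key plus a checksum and version byte; the all-zero key used here is a dummy, only to show the shape of the result.

```python
# Sketch of how a v3 .onion address is derived from a hidden service's
# 32-byte ed25519 public key, per Tor's v3 rendezvous spec:
#   address = base32(pubkey || checksum || version) + ".onion"
import base64
import hashlib

def onion_v3_address(pubkey: bytes) -> str:
    assert len(pubkey) == 32  # ed25519 public key
    version = b"\x03"
    checksum = hashlib.sha3_256(
        b".onion checksum" + pubkey + version
    ).digest()[:2]
    raw = pubkey + checksum + version  # 35 bytes -> exactly 56 base32 chars
    return base64.b32encode(raw).decode().lower() + ".onion"

# A dummy all-zero key, just to show the shape of a v3 address:
print(onion_v3_address(bytes(32)))
```

Because the address is computed from the key itself, the name is self-authenticating: knowing the address means knowing the public key, with no DNS and no certificate authority involved.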

As you can see, onion routing is quite controversial. On the one hand, the technology helps protect the privacy of Internet users; on the other, it creates a lot of room for abuse. Every once in a while, the media report that the authorities have shut down some site where drugs, weapons or other illegal goods were traded. Almost every time, this causes a stir and revives the debate about whether Tor should be outlawed. Everyone is entitled to their own opinion on the subject, but I will quote one of Tor’s creators. In 2017, the aforementioned Roger Dingledine spoke at a conference in Berlin whose theme was “Will Freedom Survive the Digital Age”. Among other things, Roger talked about Tor’s hidden services; I’ll paraphrase an excerpt from his talk: “We recently tried to verify what percentage of traffic through Tor is actually related to hidden services. It turned out to be about 2-3%, which means that roughly 97-98% of users use Tor to visit regular sites like Twitter, Google and Facebook, and only a few percent visit hidden services. So the next time you see a drawing of an iceberg in a BBC article scaring you that you only know 4% of the Internet and the other 96% is the Dark Web, think about what the purpose of that was.”

Does the VPN provide anonymity?

It is probably no longer news to anyone, nor a surprise, that we cannot feel anonymous online. Even if our activity is not directly linked to us through social network identifiers, we can still be identified in other ways, and there are several of them. Our Internet provider knows exactly which websites we have visited and which services we have used. Some of you will say that you have nothing to hide; others will say that they are outraged by the lack of privacy. Regardless of which group you belong to, you should be aware that there are technologies that are quickly associated with privacy and anonymity. Let’s consider whether they actually provide it.

Virtual Private Networks, or VPNs. If we started to ask random people on the street what they think a VPN is, we would probably hear answers such as “a secure network that hides your IP number”, “a method of connecting to the office” or “a network that allows you to be anonymous”. And while each of these answers carries a bit of truth, the lack of full context means that we often get the wrong idea about what a VPN actually is and is not. Let’s systematize this knowledge so that after listening to this episode everyone can answer the question of whether a VPN really provides anonymity and privacy.

Let’s start with what a VPN is. We can imagine a VPN as any other client-server service. Our computer is the client, and the server is simply another computer that waits for connections from clients. The server additionally verifies the clients that try to connect and provides some extra functionality. From the application standpoint, we use a VPN just like many other services: we simply connect to the server. However, the effect of such a connection is unique, because through it we gain access to an additional network. From the point of view of our device it will be basically the same as any other network, but two aspects make it special. First, the additional network may physically run through any number of intermediaries, such as our ISP, numerous autonomous systems or intermediary devices, but from our perspective as a user it will appear as a network directly connected to our device. In other words, the network we connect to may be physically separated from us by multiple networks, yet to us it looks as if we were plugged straight into it. The second aspect of such a connection is encryption. All our activity in this additional virtual connection is encrypted in both directions, all the way between the server and our client. A connection that creates an additional virtual network, in this case an encrypted one, is what we call a “tunnel”. We say “tunnel” because inside it we are able to hide our communications, even if the websites we are communicating with do not themselves provide encryption. However, I must point out that the tunnel exists only between the client and the server. So if the server passes our connection somewhere further, we should remember about proper security measures, like TLS. The real usefulness of a VPN begins where the server can forward our connections into some trusted network.
And this brings us to the essence of VPN applications: a VPN is most often used as a gateway enabling connection with an office or some remote infrastructure.
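The idea of a tunnel, an encrypted envelope wrapping the inner traffic, can be illustrated with a toy sketch. The XOR “cipher” below is deliberately simplistic and insecure; real VPNs use vetted primitives such as AES-GCM or ChaCha20-Poly1305, and all names here are illustrative:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only,
    # NOT a secure cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def tunnel_wrap(session_key: bytes, inner_packet: bytes) -> bytes:
    """Encrypt an inner packet for transport through the tunnel."""
    ks = keystream(session_key, len(inner_packet))
    return bytes(a ^ b for a, b in zip(inner_packet, ks))

def tunnel_unwrap(session_key: bytes, outer_payload: bytes) -> bytes:
    """The other endpoint reverses the operation with the same key."""
    return tunnel_wrap(session_key, outer_payload)  # XOR is symmetric

key = b"shared-session-key"
inner = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
wrapped = tunnel_wrap(key, inner)   # what an eavesdropper would see
```

The point of the sketch is the shape of the mechanism: everything between client and server travels as opaque ciphertext, and only the two tunnel endpoints, which share the session key, can recover the inner traffic.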

In order to systematize our knowledge about VPNs and better understand how they work, it is useful to understand the general mechanism of connecting to such a network. The mechanism is quite similar to the “handshake” I described in the previous section, but it is even more restrictive. In this case, both the client and the server have their own cryptographic key pairs. That is, each side has its own public key, which it shares with the other side, and its own private key, which it keeps strictly to itself. The keys in a pair work in such a way that if we encrypt a message with one of them, we can decrypt it with the other. That is, if we encrypt a message with the public key, only the owner of the private key can read it, and vice versa. Because the private key is never shared while the public key may be publicly available, encryption with the private key is generally called “signing”. If someone “signs” a message with their private key, anyone in possession of the public key can verify that the message was actually signed with the corresponding key. In the process of connecting to a VPN, the client and server exchange their public keys. It is as if, for example, the server sent me its public key saying “here is my public key, encrypt the messages you address to me with it, because only I will understand their content”. Additionally, both sides of the VPN communication have a certificate from a “certificate authority” so that they can verify each other’s identity. Both parties can then be sure that the public key they received from the other party over the Internet actually belongs to that party and was not intercepted somewhere along the way and swapped for some crafted key. Once both parties have verified each other, the client generates a random string and encrypts it with the server’s public key. The string is used to create a shared session key. From then on, both the client and the server use the shared key for encryption.
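The final step, both ends independently computing the same session key, can be sketched as follows. This is a simplified stand-in: the names are illustrative, the public-key encryption of the pre-master secret is elided, and a real VPN uses a proper key-derivation function such as HKDF or the TLS PRF:

```python
import hashlib
import secrets

# The client generates a random pre-master secret. In a real handshake
# this value travels encrypted under the server's public key, so only
# the server can recover it; here we simply hand it to both sides.
pre_master = secrets.token_bytes(32)
client_nonce = secrets.token_bytes(16)
server_nonce = secrets.token_bytes(16)

def derive_session_key(pre_master: bytes,
                       client_nonce: bytes,
                       server_nonce: bytes) -> bytes:
    # Toy stand-in for a real KDF: both sides mix the same inputs
    # in the same way, so they arrive at the same symmetric key.
    return hashlib.sha256(pre_master + client_nonce + server_nonce).digest()

client_key = derive_session_key(pre_master, client_nonce, server_nonce)
server_key = derive_session_key(pre_master, client_nonce, server_nonce)
```

Because both endpoints feed identical inputs into the same derivation, they end up with the same symmetric key without that key ever crossing the wire in the clear.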
Such a session creates a “tunnel” through which only messages encrypted with the newly established key pass. The server also becomes a router, meaning that every connection passes through it and is forwarded where necessary. Commercial VPN providers operate in such a way that all of their customers’ traffic passes through their servers. For example, if we connect to a server in Madrid, we may see in the web browser that web pages appear in Spanish. This is because the server in Madrid is now our router, and the website we are connecting to has automatically matched the language to the sender’s IP. When we are connected to such a VPN server in Madrid, our ISP only knows that we are using a VPN server somewhere in Madrid; it knows nothing else. However, our entire communication is now visible to the VPN provider. So in effect we have simply made our Internet activity known to someone else. What is worse, the VPN provider often requires registration, which makes it easier for them to associate us with particular activity. We definitely do not gain anonymity this way.
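The server-as-router behaviour can be sketched as a toy NAT: the VPN server rewrites the source address of each outgoing packet to its own public IP and keeps a table so replies find their way back. All addresses and names below are illustrative (the public IP comes from a documentation range):

```python
VPN_PUBLIC_IP = "203.0.113.10"  # illustrative documentation-range address

# (destination, client source port) -> original client address,
# so return traffic can be routed back to the right client.
nat_table: dict[tuple[str, int], str] = {}

def forward(packet: dict) -> dict:
    """Rewrite the client's source IP before sending to the destination."""
    nat_table[(packet["dst"], packet["sport"])] = packet["src"]
    return dict(packet, src=VPN_PUBLIC_IP)

client_packet = {"src": "10.8.0.2", "sport": 51515,
                 "dst": "93.184.216.34", "dport": 443}
on_the_wire = forward(client_packet)
```

This is exactly why the destination only ever sees the VPN server’s address, and also why the provider, who holds the mapping table, sees everything.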

Now let’s talk about what a VPN definitely isn’t and what it doesn’t do. Sometimes you may come across opinions like “I use a VPN, I’m safe”. The very phrase “be safe” sounds frankly a bit funny in the context of computers, but let’s try to list some of the most popular myths. A VPN does not protect us from malware. It is really just a method of passing connections through an encrypted tunnel; it has nothing to do with scanning or blocking any software. If you use a VPN to connect to some suspicious site or receive an email with a virus, the effect will be the same as without the VPN. Another myth concerns anonymity: a VPN does not provide it. Since the VPN server plays the role of a router, it overwrites the source IP during routing, so the destination server you connect to via VPN will indeed see the IP of the VPN server. However, it is worth asking: so what? The vast majority of Internet users do not have a public IP at all; their traffic already passes through address translation at their ISP, so the target server from which we want to hide our IP will in most cases see some IP belonging to our ISP anyway and thus will not associate us personally with this activity. It can, however, identify us through the cookies that our web browser stores. If we have cookies in our browser that are designed to identify clients, the destination server will know perfectly well who we are, whether we connect via VPN or directly. Besides, if we decide to log in with our personal credentials (user and password), all anonymity ends.

After all, we logged in explicitly, and the server knows it’s us, whether we came through a VPN or directly. I have also come across the claim that VPNs are illegal. This is absolutely false. Countless companies use VPNs to share their resources with employees working remotely. VPNs are also heavily used for site-to-site connections, i.e. between two centers that are geographically distant from each other but need to remain in constant communication. Perhaps this stereotype was born when someone used the technology for the wrong purposes.

Now that we know that a VPN does not really provide anonymity, let’s consider what we can use it for. When does it make sense? The most natural example of a VPN application is remote access to an office network. Let’s assume that in the office we have a file server, a database and several computers, and we want to be able to connect to each of these components remotely. Without a VPN, each component would have to be somehow exposed and visible on the Internet. That means additional configuration of services and the router and, worst of all, additional risk that someone will find vulnerabilities and use them to attack our infrastructure. In such a situation it is better to leave all the services running in the office accessible only from within it, and expose the VPN server as the only service reachable from the outside. An authorized VPN client could then connect, go through a restrictive authorization process and, once it is completed, use the services located in the office as if they were physically present there. This would, of course, require configuring a VPN server in the office.
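As an illustration of such a setup, a client configuration for WireGuard, one popular open-source VPN, might look roughly like this. Every key, address and hostname below is a placeholder, not real infrastructure:

```ini
# Client-side WireGuard configuration (illustrative placeholders only).
[Interface]
# The client's private key, generated with `wg genkey`.
PrivateKey = <client-private-key>
# Address the client uses inside the VPN network.
Address = 10.8.0.2/24

[Peer]
# The office VPN server's public key.
PublicKey = <server-public-key>
# Public endpoint of the office server, the only exposed service.
Endpoint = vpn.example-office.com:51820
# Route the VPN subnet and the office LAN through the tunnel.
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24
```

Note how the `AllowedIPs` line captures the idea from the paragraph above: once the tunnel is up, the office subnet behaves as if it were directly attached to the client’s machine.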

A VPN can also be useful when using an untrusted network. Have you ever connected to Wi-Fi in a coffee shop without being asked for a password? A VPN is a pretty good option on a network you don’t fully trust, because it provides encryption over part of the path. If someone in the coffee shop has set up a crafted Wi-Fi access point and we connect to it, we could be eavesdropped on. A VPN will provide us with encryption even if the connection itself is transmitted in plain text. Again, keep in mind that VPN encryption only covers the path from the client to the server and not beyond. So if we are not sure whether we can take such a risk, it is better simply to refrain from it.

So is it worth using a VPN to feel more anonymous and secure on the Internet? No, because a VPN was not created with anonymity in mind. We can, of course, hide our communications from our ISP, but we then hand them over to the VPN provider. If we don’t trust the ISP, why should we trust the VPN provider? We will gain more by changing our habits toward more careful use of the Internet; a VPN can only serve as an addition here.

Is it worth using a VPN to access remote resources? Absolutely. A VPN was created to provide a secure tunnel that lets you use not just one but many resources of a remote network. In a way, it simplifies IT infrastructures, because instead of exposing many services on the Internet, you expose a single service that in effect makes all of them available in bulk, and in a quite safe way.

The IT world abhors a vacuum. When a need arises, a tool appears quite quickly. This feature is particularly characteristic of Open Source. In fact, in building IT infrastructures, the problem is not the lack of tools, but finding and choosing the right tools for your needs. VPN is definitely a great tool for connecting to networks where we have some hidden services. However, for anonymity, there are much better tools, which we will talk about in the next episode.