- General introduction: P2P & Go
- General introduction: Cryptography & Go
- Project introduction: Berty & Go
- Paris P2P: Meetup presentation
- Paris P2P: Festival presentation
Security of Wireless Devices
When wireless technologies were still nascent, they didn’t pose too many security risks. Potential weak points weren’t yet discovered and even when wireless devices began to be widely introduced, the risks were relatively small. Nowadays, however, that has notably changed.
The value of information and the prospect of capitalizing on unsuspecting victims have drawn attention from malicious attackers. The previously unexploited security flaws of wireless technology are now under constant siege by potential intruders.
The concern that many people have is simple: once they’re transmitting wirelessly, what’s to prevent someone from “listening in” on the transmission?
Wireless Communication Technologies
Many wireless networks are commonly used all around us, and each has its advantages and shortcomings when it comes to security. Here’s a breakdown of how they stack up in terms of security and some familiar use cases for each.
Wi-Fi
Our constant companion in the modern world. Wi-Fi is present in some shape or form almost everywhere around us. And with the United Nations declaring internet access a human right, it will probably be as ubiquitous as electrical power in the near future. However, concerns about this technology have hounded it since its inception.
On the whole, Wi-Fi is safe when used with the proper precautions, but there are also many situations that expose us to threats. In principle, Wi-Fi is similar to other technologies in that it consists of a radio frequency transmitter and an RF receiver.
Using a private Wi-Fi network in your home isn’t risky, but open, public, and customer Wi-Fi networks generally are not very safe. Open Wi-Fi networks, in particular, are stomping grounds for malicious attackers. Some businesses offer open networks, but they should be avoided if at all possible, because there is no way to make sure no one is intercepting your data.
Whenever possible, use an ethernet connection over Wi-Fi to reduce the risks associated with the technology.
Home security cameras are a case where Wi-Fi security is crucial. If the connection between the camera and the Wi-Fi router isn’t safe, attackers can easily access the camera feed. It’s recommended to use the latest encryption standards (WPA2 with AES) on your router and choose strong passwords. Also, buying home cameras from reputable sources such as Wyze Labs will make sure they have the latest security features.
3G
3G is the third generation of mobile wireless technology. It represents a significant upgrade to the standards used in 2G networks. The increase in transfer rates made possible many new applications and services that weren’t feasible on slower networks.
From the outset, 3G (and for that matter, 4G) had relatively weak encryption. The glaring weakness of these networks is that their encryption only exists from the device to the base station. Once the data reaches the wired network, there is no encryption.
Now, that doesn’t mean that it’s unsafe, but if an attacker is motivated enough, they could gain access to that unencrypted data. But, most of the applications that you use are likely to have end-to-end encryption, so the main potential threats are phone calls and text messages.
What’s more, even for the secured data, the encryption protocols have a poor track record. The A5/2 cipher, used in the GSM networks that preceded 3G, was cracked within a month of its release, and the ciphers that replaced it have also been the subject of successful academic attacks.
Whenever you don’t need access to your network, consider putting your device on airplane mode. That way it won’t send or receive any information and it’s practically shielded from most attacks.
Bluetooth
Bluetooth is the standard for short-distance wireless devices. It was conceived in the late 80s to be used in the development of wireless headsets for mobile phones. The technology was quickly adopted for many different uses and continues to be a popular choice.
Like all wireless technology, Bluetooth transmissions are vulnerable to remote attack and spying. However, the security of a Bluetooth link will depend on the protocol being used. Different devices may use different Bluetooth standards and therefore are more or less prone to security breaches. The current standard is Bluetooth 5 but most devices are still using older standards.
The Apple Watch, for instance, uses Bluetooth Low Energy technology. This particular standard is easy to extract information from, but Apple uses a series of privacy protection measures that make it difficult to get any useful data. For instance, Apple products switch their Bluetooth LE address every 15 minutes. This prevents a snoop from getting any accurate data about who owns the device.
In contrast, other fitness trackers — such as the Fitbit — use a fixed address value. Since this value is unique and unchanging, it’s trivial to recognize and track the user via their device. Most Bluetooth LE devices constantly transmit advertising packets, which lets other devices know they’re present.
These packets, however, can be intercepted by any device and while they don’t grant access to the transmitting device, they do carry some identifying information. A good strategy is to only sync fitness trackers at home to prevent access to your data while you’re in public.
Another risk involves Bluetooth keyboards. In theory, wireless keyboards should be encrypting what they send to the receiver. So that even if someone were to have access to the data, all they’d see is an encrypted mess of data. However, in practice, most keyboard manufacturers use weak encryption protocols or, in some cases, none at all. A cybersecurity company looked into this in 2016 and found that eight major Bluetooth keyboard manufacturers used little to no encryption in their products.
This doesn’t mean that Bluetooth keyboards aren’t safe, in fact, Apple’s keyboards boast some of the best Bluetooth encryption out there. But most of the security is going to be from the pairing process.
That’s true of most Bluetooth devices. While some identifying information may be retrievable, the actual data they transmit is hard to access unless the device is allowed to pair with a receiver. That goes for Bluetooth headsets, mice, and other peripherals as well.
RFID
Radio-frequency identification (RFID) has been around for a very long time. Its most recent incarnation is RFID tags. These tiny devices are essentially dormant until they come into close proximity to an RFID reader. The reader provides the power necessary for the tag to transmit its data, and the reader can then receive it. It’s simple enough, but not terribly safe.
Things like RFID-enabled passports and credit cards pose a concern for many people. Sure enough, it has been demonstrated that RFID “skimming” is not only possible but quite easy to do. Because the RFID tag doesn’t discriminate about who receives the information, it will provide it to any reader that requests it.
In practice, very little RFID crime is reported, but the potential is always there. Experiments with directed antennas have shown that it’s possible to read RFID tags from up to hundreds of feet away.
Newer generations of RFID credit cards and passports are beginning to use encrypted data which will make it a lot harder to access the information. A good way to reduce risks is by using RFID-blocking wallets or “faraday” bags. These prevent any radio signals from reaching the devices inside.
Remote Keyless Systems in Cars
Many cars use this system, which does everything a standard car key can but without physical contact. This includes entry to the car and keyless ignition. It started to see use in the 80s and today most cars have, at the very least, keyless entry.
It's a simple radio transmitter that sends a coded signal to a receiver in the car that is tied to that specific transmitter. The transmitter has to be paired with the car’s computer, which is usually only available to dealerships and manufacturers.
The vast majority of modern keyless systems use rolling code. This basically means that every time the key fob is used to activate a function in the car, a different code is sent. This prevents anyone from scanning for the code to gain access to the vehicle. Using the same code again will not work. The remote control and the receiver use an encrypted system to share codewords.
These systems are still vulnerable to a specific kind of attack. A device can “jam” the first code used to unlock a vehicle and record it. When the vehicle owner tries again, the device will allow that code through while retaining the first one for future use.
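The rolling-code idea can be sketched in a few lines of Go. This is purely illustrative and not any manufacturer’s real scheme (real fobs use dedicated ciphers such as KeeLoq); here a shared secret, a counter, and an HMAC stand in for the encrypted codeword exchange, and the receiver accepts codes within a small look-ahead window:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// nextCode derives the code for a given counter value from a secret
// shared by the fob and the car (hypothetical scheme for illustration).
func nextCode(secret []byte, counter uint64) []byte {
	mac := hmac.New(sha256.New, secret)
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], counter)
	mac.Write(buf[:])
	return mac.Sum(nil)
}

// Receiver accepts any code within a small look-ahead window, so the
// fob and the car stay in sync even if some button presses are missed.
type Receiver struct {
	secret  []byte
	counter uint64 // last accepted counter value
	window  uint64
}

func (r *Receiver) Accept(code []byte) bool {
	for c := r.counter + 1; c <= r.counter+r.window; c++ {
		if hmac.Equal(code, nextCode(r.secret, c)) {
			r.counter = c // replaying this code will now fail
			return true
		}
	}
	return false
}

func main() {
	secret := []byte("shared-secret")
	car := &Receiver{secret: secret, window: 16}
	code := nextCode(secret, 1)
	fmt.Println(car.Accept(code)) // fresh code: accepted
	fmt.Println(car.Accept(code)) // same code replayed: rejected
}
```

Note how accepting a code advances the counter, which is exactly why the jamming attack described above works: the recorded code stays inside the acceptance window because the car never saw it.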
Keep Up with Security
Wireless technologies will always be susceptible to attacks. These are only some of the most popular ones in use, but more are being developed every day. While it falls outside of the scope of this article to describe every technology in detail, you should take it upon yourself to learn the best security practices and habits for your devices.
Information is the most valuable currency and keeping yours should be a top priority. Keep your wireless devices safely stored when not in use. When they are in use, do everything you can to minimize your exposure to threats. An ounce of prevention is worth a pound of cure when it comes to wireless security.
Peer-to-peer sharing was a feature of the defunct ARPANET of 1969. As technology advanced, so did the government and entertainment industry giants’ efforts to suppress file-sharing.
However, P2P has survived well into the 21st century and it seems that the best is yet to come for the P2P community. Numerous new technologies are springing up and innovations and improvements are constantly being introduced.
Crash Course on the History of P2P
File sharing began back when the first computer networks were introduced. The ARPANET allowed users to send and receive files directly – that was back in 1969. One of the earliest transfer protocols was FTP (file transfer protocol). It was introduced in 1971.
In 1979, Usenet was born. It was primarily made for dial-up technology, but it made its way into the internet more than a decade later. Users could exchange files on bulletin boards. The video game Doom first became popular on bulletin boards in the early 1990s.
Two decades later, in 1999, Napster was created, and with it, the modern era of P2P file sharing. Napster used a centralized indexing server, which would prove to be its downfall. Almost immediately after its introduction, Napster experienced a meteoric rise in popularity. By 2000, it had more than a million users. The next year, Metallica sued Napster, and in July 2001 the service was shut down.
One year after Napster’s inception, Gnutella led a new wave. Unlike its predecessors, Gnutella was decentralized and allowed more people to use the platform at the same time. LimeWire is perhaps the most famous Gnutella client.
The next big step in the development of P2P file sharing happened in 2001, when Bram Cohen introduced BitTorrent. The protocol is still in use today, making it one of the oldest and most widely used P2P protocols.
BitTorrent introduced a host of innovations. Users could search for files on websites hosting trackers, while the file sharing happened directly between users. Additionally, BitTorrent clients break a file into small pieces that can be downloaded from multiple hosts in parallel, increasing download speeds tremendously.
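The piece-splitting idea can be sketched in a few lines of Go. This is a hypothetical illustration, not the real BitTorrent wire format (real clients use pieces of 256 KiB or more and a `.torrent` metainfo file): each piece gets its own hash, so a piece can be verified on its own no matter which peer it came from.

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

const pieceSize = 4 // tiny for illustration; real clients use 256 KiB+

// split breaks a file into fixed-size pieces and records each piece's
// hash, so every piece can be verified independently.
func split(file []byte) (pieces [][]byte, hashes [][20]byte) {
	for len(file) > 0 {
		n := pieceSize
		if len(file) < n {
			n = len(file)
		}
		pieces = append(pieces, file[:n])
		hashes = append(hashes, sha1.Sum(file[:n]))
		file = file[n:]
	}
	return pieces, hashes
}

// verify checks a piece received from an untrusted peer against the
// hash published in the torrent's metadata.
func verify(piece []byte, want [20]byte) bool {
	return sha1.Sum(piece) == want
}

func main() {
	_, hashes := split([]byte("hello, swarm!"))
	fmt.Println(verify([]byte("hell"), hashes[0])) // genuine first piece: ok
	fmt.Println(verify([]byte("hole"), hashes[0])) // corrupted piece: rejected
}
```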
Bitcoin was introduced eight years after BitTorrent and is still in widespread use today. Though it wasn’t designed for P2P file sharing, it brought about a new generation of P2P storage frameworks. It is based on blockchain technology.
Blockchain is so named for its constantly growing list of connected blocks. Each block, or record, contains data, a unique hash, and the previous block’s hash. Bitcoin’s blockchain adds a new block roughly every 10 minutes, and it runs on a decentralized P2P network that anyone can join.
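The block structure described above can be sketched as a toy in Go. This illustrates hash linking only; a real blockchain block also carries a timestamp, a Merkle root of transactions, and a proof-of-work nonce:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Block holds some data, its own hash, and the previous block's hash.
type Block struct {
	Data     string
	PrevHash string
	Hash     string
}

func hashOf(data, prevHash string) string {
	sum := sha256.Sum256([]byte(prevHash + data))
	return hex.EncodeToString(sum[:])
}

// appendBlock links a new block to the tip of the chain.
func appendBlock(chain []Block, data string) []Block {
	prev := ""
	if len(chain) > 0 {
		prev = chain[len(chain)-1].Hash
	}
	return append(chain, Block{Data: data, PrevHash: prev, Hash: hashOf(data, prev)})
}

// valid reports whether every block still matches its recorded hashes;
// tampering with any block's data breaks the links after it.
func valid(chain []Block) bool {
	for i, b := range chain {
		if b.Hash != hashOf(b.Data, b.PrevHash) {
			return false
		}
		if i > 0 && b.PrevHash != chain[i-1].Hash {
			return false
		}
	}
	return true
}

func main() {
	chain := appendBlock(nil, "genesis")
	chain = appendBlock(chain, "alice -> bob: 5")
	fmt.Println(valid(chain)) // untampered chain: valid
	chain[1].Data = "alice -> mallory: 500"
	fmt.Println(valid(chain)) // tampered data: detected
}
```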
The IPFS (InterPlanetary File System) network and protocol were introduced in 2015. IPFS is the next step in P2P file sharing and works similarly to BitTorrent: users can host content as well as download it, there is no central server, and each user stores a small portion of the overall data.
It is also similar to a blockchain in that it uses hash-linked blocks, so the data within IPFS blocks can’t be manipulated without changing the blocks’ hashes. Unlike a blockchain, however, IPFS supports file versioning.
Ethereum is another popular P2P platform based on blockchain technology. It is somewhat similar to Bitcoin; Ether is the name of the cryptocurrency used on the Ethereum network.
Ethereum was launched as an open-source platform in 2015. You can use it to make pseudonymous transactions and share data with other users.
Like some other advanced blockchain networks, Ethereum uses smart contracts. These are protocols designed to facilitate the execution of transactions by cutting out the middleman.
The Start of P2P
The initial vision of Tim Berners-Lee, regarded as the inventor of the World Wide Web, was for the internet to be similar to a P2P network. He envisioned the internet as a place where all users would and should be active content contributors and editors.
Its early precursor, the ARPANET, allowed two remote computers to send and receive data packets. However, it was neither a self-organized nor a decentralized file-sharing system. Additionally, it didn’t support content- and context-based routing.
Usenet addressed many of those issues, continuing and evolving the idea of a free internet.
The Continued Appeal of P2P
Nowadays, thanks to advanced technology, P2P networks can offer much more than content and context-based file searches. Some of the top reasons for using and improving P2P platforms today include:
- Anonymity and privacy. P2P networks allow users to remain anonymous and protect their privacy on the network.
- Cooperation and resource sharing. Many are drawn to P2P networks for the cooperation and sharing of resources.
- Trust and accountability. Modern P2P networks are largely based on trust and the transactions have to be community approved.
- Decentralization and lack of censorship. Today’s P2P networks are decentralized, thus preventing almost all forms of censorship. This ensures network neutrality.
- Data integrity and encryption. Blockchain introduced hash numbers and proof-of-work. The latest innovations include encryption and smart contracts.
The BitTorrent protocol remains popular even though almost two decades have passed since its introduction. It has faced many adversities over the years: more modern and advanced P2P platforms, poor business decisions on the part of its creator and his associates, and countless legal problems, including with the US government.
What kept BitTorrent alive all this time is the fact that it’s decentralized, easy to use, and built for easy transfers of huge amounts of data. Other than that, Facebook, Blizzard, and Twitter have openly admitted to using BitTorrent. Most importantly, the values of sharing and cooperation among BitTorrent users kept the flame burning through the dark times.
P2P Hall of Fame
Here’s a list of some of the most important people in the history of P2P sharing.
- Tim Berners-Lee, inventor of the World Wide Web.
- Sean Parker and Shawn Fanning, founders of Napster.
- Bram Cohen, the mastermind behind the BitTorrent protocol.
- Gottfrid Svartholm, Fredrik Neij, and Peter Sunde, creators of The Pirate Bay.
- Satoshi Nakamoto, creator of Bitcoin and blockchain technology.
P2P is starting to gain traction in the outside world. More and more people are adopting and incorporating the rules and ideas that govern P2P file-sharing technologies into their lives. This is especially true of the self-organizing communities that have sprung up in recent years.
Self-organizing communities share a number of values and principles characteristic of P2P technologies. They might be appealing to a wide range of individuals and groups, most notably those interested in cooperation and resource sharing, proponents of decentralization, and the occasional anarchistic souls.
What works well
- having a monorepo
- being protobuf-first and generating a lot of code
- the codebase was “big-refactor”-friendly, including several refactors that modified 50+ files at once
- we’ve learned a lot about:
  - our project: the features, the roadmap, the difficulties, etc.
  - our dependencies (IPFS, gomobile, react-native, BLE, etc.)
What needs to be improved
- The code was too complex to read
- The codebase was too complex to update safely
- There were not enough rules about:
- where to implement something, how to name things
- how to implement things
- Makefile rules, and CI can be improved
- The tests should be more reliable
- We need to learn more about our future protocol; for now, it exists only in our heads, and we will undoubtedly fail to implement v1 of the protocol. I prefer to fail fast!
Several blogposts, slides, repos, and videos later…
I spent the last three days reading blog posts, slides, and repositories, and watching videos about what other people are doing right now.
Then, I looked back on Berty and my other projects and listed a set of rules I like the most.
As usual, a rule is something that can always have exceptions :)
- Focus on readability; checking what the godoc looks like is a very good way to tell whether the API will be easy to adopt.
- Avoid magic, no global vars, no
- Sharing logic / reusable business functionality is most of the time over-engineering
- Enumerate requirements in function constructors. Use dependency injection (not dependency containers!) and make `go build` your best friend; the logger should also be injected
- If your project is small enough, put everything at the root of the project -> mono package
- When you are creating a very powerful and complex library, it can be a good idea to also make a little-sister library: a light, opinionated wrapper around the more permissive one
- Embrace middlewares to loosen coupling for timeout handling, retry mechanisms, authentication checks, etc.
- Reduce comments, focus on useful variable and function naming
- Function and variable names are important to review
- Limit the number of packages, the number of functions, the number of interfaces
- Keep things simple and do not split into too many components at the beginning; split only because of an actual problem, not in anticipation of one
- Always try to keep the indentation level minimal
- Use short function and variable names
- Variables can even be one or two letters long (initials) when used near their initialization
- Receiver name should always be 1 or 2 letters long
- Prefer synchronous functions to asynchronous ones; it’s easy to make an asynchronous wrapper over a synchronous function, not the opposite
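A sketch of why this direction is cheap: wrapping a synchronous function in a goroutine takes a few lines, while flattening an asynchronous API back into a synchronous one is much messier. The `fetch` function here is invented for illustration.

```go
package main

import "fmt"

// fetch is a plain synchronous function: easy to call, easy to test.
func fetch(url string) (string, error) {
	return "body of " + url, nil // stands in for real work
}

type result struct {
	body string
	err  error
}

// fetchAsync wraps the synchronous function in a goroutine for callers
// that want concurrency; the buffered channel lets the goroutine exit
// even if nobody reads the result.
func fetchAsync(url string) <-chan result {
	ch := make(chan result, 1)
	go func() {
		body, err := fetch(url)
		ch <- result{body, err}
	}()
	return ch
}

func main() {
	r := <-fetchAsync("https://example.com")
	fmt.Println(r.body)
}
```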
- Use named results (`return`) for documentation
- Be flat: only use `pkg/` for packages you want other people to use, and `internal/` for your implementation details; most of the code should start in `internal/` and be moved to `pkg/` only after you are sure it can be useful for someone else and it has become mature enough that there is less risk of it changing.
- Use feature-flags to configure the app; feature-flags are “documentation”! They also allow you to have (multiple) (unfinished) (long-running) experiments merged more quickly
- Flags should be taken into account in this order: CLI > config > env
- Use a structured logger, and bind it with the std logger (https://github.com/go-kit/kit/tree/master/log#interact-with-stdlib-logger)
- If your repo uses multiple main languages, each should be namespaced in its own directory to make everything easier for the tools to manipulate.
- Put your .proto files in an `api/` directory, but you can configure them to generate files in your existing Go packages.
- Goroutines
- Should always have a well-defined lifecycle
- You can use https://godoc.org/github.com/oklog/run
- Look at those patterns: Nursery, Futures, Scatter/Gather
- Package names should be:
- the same as the directory name (always)
- singular, lowercase, alpha-num
- unique in your project; unique with go core packages too, if possible
- Use `-race` when building and testing, from the beginning
- `context.Value` is only used for request-scoped information and only when it can’t be passed in another way
- Do not hesitate to pass `context.Context` as the first argument of most of your functions (I need to investigate more and have a stricter rule here)
- Always put a `doc.go` file in the `pkg/*` packages to configure the package vanity URLs and hold some documentation. When your package has multiple Go files, it will be easier to know where to edit those things.
- Avoid having too many interfaces, and when you do create some, try to always declare them in the caller package, not the implementer one
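A sketch of the caller-side rule: the interface lives in the package that consumes the behavior, stays as small as possible, and implementers satisfy it implicitly. All names here are invented for illustration.

```go
package main

import (
	"fmt"
	"strings"
)

// Greeter is declared here, in the package that *uses* the behavior,
// and kept minimal; implementers elsewhere satisfy it implicitly
// without ever importing this package.
type Greeter interface {
	Greet(name string) string
}

// Announce only depends on the narrow Greeter behavior.
func Announce(g Greeter, names ...string) string {
	var b strings.Builder
	for _, n := range names {
		b.WriteString(g.Greet(n))
		b.WriteString("\n")
	}
	return b.String()
}

// politeGreeter could live in a completely different package; it
// never mentions the Greeter interface.
type politeGreeter struct{}

func (politeGreeter) Greet(name string) string { return "Hello, " + name + "!" }

func main() {
	fmt.Print(Announce(politeGreeter{}, "Alice", "Bob"))
}
```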
- `go test` should always work after a fresh clone! If you have unreliable or environment-specific tests, put them behind flags or env vars
- The tests should be easily readable and explanatory; they are probably the best place to “document” the edge cases of your library
- Use table-driven tests a lot
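A sketch of the table-driven style with an invented `Clamp` function under test; in a real `_test.go` file the loop body would call `t.Errorf` on a `*testing.T` instead of returning an error:

```go
package main

import "fmt"

// Clamp limits v to the range [lo, hi] (hypothetical function under test).
func Clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

// checkClamp runs a table of named cases; adding a new edge case is
// just one more row.
func checkClamp() error {
	tests := []struct {
		name            string
		v, lo, hi, want int
	}{
		{"inside", 5, 0, 10, 5},
		{"below", -3, 0, 10, 0},
		{"above", 42, 0, 10, 10},
		{"edge-low", 0, 0, 10, 0},
	}
	for _, tt := range tests {
		if got := Clamp(tt.v, tt.lo, tt.hi); got != tt.want {
			return fmt.Errorf("%s: Clamp(%d, %d, %d) = %d, want %d",
				tt.name, tt.v, tt.lo, tt.hi, got, tt.want)
		}
	}
	return nil
}

func main() {
	if err := checkClamp(); err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("ok")
}
```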
- If you are manipulating test-fixtures often, you can add a test
- If you write mocks, they should be implemented in the same package as the real implementation, in a `testing.go` file; a mock should, in general, return a fully started in-memory server.
- If you need to write tests at runtime, you can use http://github.com/mitchellh/go-testing-interface
- If you have a complex struct, i.e., a server, do not hesitate to add a `Test bool` field that configures it to be testing-friendly
- When testing complex structs, compare a string representation (JSON, or something like that)
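A sketch of the string-representation approach: serialize both structs to indented JSON and compare the strings, which gives a readable diff when they differ. The `Config` type here is invented for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Config stands in for a complex struct under test.
type Config struct {
	Name  string   `json:"name"`
	Peers []string `json:"peers"`
}

// jsonString renders a struct to a stable, readable string so tests
// can compare one blob instead of asserting field by field.
func jsonString(v interface{}) string {
	b, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	got := Config{Name: "berty", Peers: []string{"a", "b"}}
	want := Config{Name: "berty", Peers: []string{"a", "b"}}
	fmt.Println(jsonString(got) == jsonString(want)) // equal structs: identical JSON
}
```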
- Only test exported functions; unexported functions are implementation details
- If you write helpers, they should not return an error but take `testing.T` as an argument and call
- Most of the rules defined here can be skipped entirely in the `internal/` directory. This directory is the perfect place for things that change often.
- Add a githook that run
- When is it better to have a `ListAllUsers()` + `ListUsersByGroup()` + `ListActiveUsers()`...?
- What’s the best way of organizing code that involves multiple languages, i.e., bridges?
- When does it make sense to have an
- When does it make sense to have a `model` package vs. a
Suggested project layout for the monorepo of a big project
- api/
  - a.proto
  - a.swagger.json (generated)
  - b.proto
  - b.swagger.json (generated)
- assets/
  - logo.png
- build/
  - ci/
    - script.sh
  - package/
    - script.sh
- configs/
  - prod.json
  - dev.json
- deployments/
  - c/
    - docker-compose.yml
  - d/
    - docker-compose.yml
- docs/
  - files.md
- examples/
  - descriptive-dirname/
    - ...
- githooks/
  - pre-commit
- go/
  - cmd/
    - mybinary/
      - main.go
  - internal/
    - e/
      - doc.go
      - e.go
    - f/
      - doc.go
      - f.go
  - pkg/
    - g/
      - doc.go
      - g.go
    - h/
      - doc.go
      - h.go
  - Makefile
  - go.mod
- js/
- test/
  - testdata/
    - blob.json
- tools/
  - docker-protoc/
    - Dockerfile
    - script.sh
- Makefile
- Dockerfile
Interesting links and quotes I loved
I however also run into cases where I end up accidentally writing Java-style interfaces - typically after I come back from a stint of writing code in Python or Java. The desire to overengineer and “class all the things” is quite strong, especially when writing Go code after writing a lot of object-oriented code.
TL;DR — The House (Business) Always Wins – In my 15-year involvement with coding, I have never seen a single business “converge” on requirements. They only diverge. It is simply the nature of business, and it’s not the business people’s fault.
TL;DR - Duplication is better than the wrong abstraction - Designs are always playing catch-up to changing real-world requirements. So even if we found a perfect abstraction by a miracle, it comes tagged with an expiry date, because #1 — The House wins in the end. The best quality of a design today is how well it can be undesigned. There is an amazing article on writing code that is easy to delete, not easy to extend.
TL;DR — Wrappers are an exception, not the norm. Don’t wrap good libraries for the sake of wrapping.
TL;DR — Don’t let <X>-ities go unchallenged. Clearly define and evaluate the Scenario/Story/Need/Usage. Tip: Ask a simple question — “What’s an example story/scenario?” — And then dig deep on that scenario. This exposes flaws in most <X>-ities.
Industrial programming means writing code once and maintaining it into perpetuity. Maintenance is the continuous practice of reading and refactoring. Therefore, industrial programming overwhelmingly favors reads, and on the spectrum of easy to read vs. easy to write, we should bias strongly towards the former.
Looking at interfaces as a way to classify implementations is the wrong approach; instead, look at interfaces as a way to identify code that expects common sets of behaviors.
Instead of making code easy-to-delete, we are trying to keep the hard-to-delete parts as far away as possible from the easy-to-delete parts.
Write more boilerplate. You are writing more lines of code, but you are writing those lines of code in the easy-to-delete parts.
I’m not advocating you go out and create a /protocol/ and a /policy/ directory, but you do want to try and keep your util directory free of business logic, and build simpler-to-use libraries on top of simpler-to-implement ones. You don’t have to finish writing one library to start writing another atop.
Layering is less about writing code we can delete later, but making the hard to remove code pleasant to use (without contaminating it with business logic).
You’ve copy-pasted, you’ve refactored, you’ve layered, you’ve composed, but the code still has to do something at the end of the day. Sometimes it’s best just to give up and write a substantial amount of trashy code to hold the rest together.
Business logic is code characterized by a never-ending series of edge cases and quick and dirty hacks. This is fine. I am ok with this. Other styles like ‘game code’, or ‘founder code’ are the same thing: cutting corners to save a considerable amount of time.
The reason? Sometimes it’s easier to delete one big mistake than try to delete 18 smaller interleaved mistakes. A lot of programming is exploratory, and it’s quicker to get it wrong a few times and iterate than think to get it right first time.
the whole step 5 is <3
I’m not suggesting you write the same ball of mud ten times over, perfecting your mistakes. To quote Perlis: “Everything should be built top-down, except the first time”. You should be trying to make new mistakes each time, take new risks, and slowly build up through iteration.
Instead of breaking code into parts with common functionality, we break code apart by what it does not share with the rest. We isolate the most frustrating parts to write, maintain, or delete away from each other. We are not building modules around being able to re-use them, but around being able to change them.
When a module does two things, it is usually because changing one part requires changing the other. It is often easier to have one awful component with a simple interface, than two components requiring a careful co-ordination between them.
The strategies I’ve talked about — layering, isolation, common interfaces, composition — are not about writing good software, but how to build software that can change over time.
A common fallacy is to assume authors of incomprehensible code will somehow be able to express themselves lucidly and clearly in comments.
An introduction presentation I made about Cryptography for Developers.