Author: Gökmen Görgen

  • Key Museum

    I discovered a beautiful car museum in Izmir. I flew to Izmir at the invitation of my former colleague and we met at Adnan Menderes airport. We rented a car to avoid wasting time on public transport and drove straight to Torbalı and the museum.

    The museum, which opened its doors in 2015, offers visitors a delightful journey through the evolution of automobile history. Upon entering, one is greeted by a fascinating array of iconic cars representing different eras, from the past to the present. The museum’s collection not only showcases well-known classic cars, but also includes rare gems that spark the imagination and transport visitors to bygone times. Apart from the impressive car display, the museum features dedicated areas for motorcycles, accessories, and intricate car mockups, adding depth and variety to the overall experience. Describing every nook and cranny of the museum would spoil the element of surprise for future visitors, but rest assured, there is much more to explore and marvel at. For those with an interest in automotive history, or simply a curiosity to witness the evolution of transportation, this museum comes highly recommended. Whether you’re a fervent car enthusiast or simply passing through the area, it’s an experience not to be missed.

    Below are pictures of some of the cars I picked out, along with a description of each and why it is special to me.

    Alfa Romeo Giulietta

    Two-door convertible (cabriolet), red, 1963 model. It has an owl-like design, with a bird’s-beak grille and round headlights. The window edges, wipers, bumper, bonnet, and doors are all chrome-plated. There is a mirror on the left side and one in the centre above the dashboard, but none on the right. Since the steering wheel is also on the left, you have to do the famous shoulder check, which the Germans call “Schulterblick”: glancing back over your right shoulder to see the right rear.

    Although it has not been renewed for a few years, the Giulietta series still exists on the roads as a five-door C-segment car. Alfa Romeo now belongs to the Stellantis group, so I guess we will see new models with the group’s engines. But I don’t know whether the old Alfa Romeo spirit will remain.

    MG MGA MKII

    It is blue, and the side mirrors are not at door level but a little further forward, on the sides of the bonnet. Again, as with the Alfa Romeo, there are round headlights, and again chrome details are favoured in many places; I think this was the fashion of the 1960s. Alfa Romeo was an Italian brand, MG a British one, yet there is a strange similarity in the designs. Again two doors, again a convertible. I wonder what the rivalry between them was like.

    I have never seen an MG on Turkish roads, but I tried to understand what the Chinese company SAIC is aiming for after buying MG. It is a bit like the Volvo and Geely relationship, or Lancia and Stellantis: instead of creating a brand from scratch, they are trying to capture the old market by using the pre-existing image. So MG must have had some prestige.

    Porsche 356 A

    Another 1957 car, this time a Porsche. How similar the designs were back then: again two doors, again chrome details, again a convertible. But this time there is no radiator grille, and that is the most obvious difference. The engine is in the back, and it is rear-wheel drive.

    The grey colour suits it very well. I used to drive a similar car in Need for Speed: Porsche Unleashed; I think it was a coupé model with a convertible top.

    This model no longer exists, but the 911 has a similar design; the 911 is a kind of successor to it. As far as I know, while the rest of Porsche’s lineup is turning electric, the 911 will continue to have an internal combustion engine. I think it will fight on until 2035 with an engine that runs on a low- or near-zero-emission fuel.

    Jaguar E-Type

    Yet another car I have not seen in real life, but I know the design is iconic and remember it from popular movies old and new. Its most distinctive feature is the very long front bonnet with a hump running down its centre. The front bumper sits on the right and left sides rather than in the middle, because the radiator grille is there and there is not even a place to mount the number plate. The wheels have a spoked design similar to old bicycle wheels. It’s a roadster; if I had a garage, it would be a nice alternative to the Mazda MX-5. But it’s so old, I don’t know if they make them anymore.

    Jaguar is now more prominent with its SUV models. In fact, Waymo, the autonomous driving technology company in the USA, prefers these Jaguar SUVs for its projects.

    BMW E30 M3 Coupe & GTS

    This is the design that comes to my mind when I think of BMW: a pair of round headlights on the left and right, a grille with chrome ornaments in the centre, the M3 logo, a spoiler; a light, small but powerful car. It already had a beauty that defies the years, and I don’t understand why they tried different things. The orange car next to it in the photo was the M3 GTS, released in 2011; it may be the last one I followed. Oh wait, the 2023 M2 may be an exception: it’s still beautiful and doesn’t look like the other BMW models.

    There were quite a lot of BMW models in this museum; as I understand it, the owners have a business partnership with BMW in Turkey. So if you like BMW, I recommend visiting the museum.


    I am not a classic car lover, nor someone who rejects modern technologies. However, to understand the identity of some automobile brands, to sympathize with and recognize them, it is necessary to know their history. Automobile museums exist precisely for this purpose, and visiting just one is not enough. One of my short-term goals is to visit the Rahmi Koç museum (again), the Classic Remise museums in Berlin or Düsseldorf, the Mercedes museum in Stuttgart, the BMW museum in Munich, and every other museum I can, and to build up a culture around cars.

  • Curiosity in Children

    This isn’t advice, but I’d like to share what I did to get my daughter interested in coding. Like any parent, I showed her all sorts of things for learning to code at a young age, and she had zero curiosity. After a while I couldn’t understand why I had done this, so I put it on the shelf and let her do whatever she wanted; I just watched and waited for her to ask for something. She said she wanted to write and draw on the walls of her room. I said yes, you can, while privately thinking about the price of repainting the walls. But anyway.

    Then, while I was playing a game on my Nintendo, the good questions came: “Are you playing this thing?”, “Can it jump?”, “Can you change its clothes?”, “Does it have this function?”… That was a good point to start showing her something about coding. I began by showing her how to play Minecraft and Animal Crossing, and she played them in creative mode for a while. Even though what you can do in these games is very broad, she felt restricted in some places. For example, since Animal Crossing had no cooking feature at the time, she thought she should be able to put things in the fridge and cook with them.

    Then the expected question came. She asked, “How are these features added to this game? Can I add them too?” I explained that it is not so easy, but there are ways: first of all, you have to learn how to code, design, and use the right programs… In the meantime, she had already digitised her drawings on her own, using apps on her tablet with a pen, drawing what she wanted to add to the game, even adding simple animations, video, and voice-over. But again, she realised that something was missing.

    Her eyes had been used to seeing code on my monitor since she was a baby. One day she came to me and said, “Dad, I want to code a game too.” So I introduced her to Swift Playgrounds. We haven’t finished it yet, but I think she’ll manage it when her homework leaves her time.

    The funny thing is that this is the same app, Swift Playgrounds, that I showed her a couple of years ago to teach her how to code, but back then she wasn’t interested. The second thing I would say is that the tool, whether code.org, Tinkercad, or Scratch, doesn’t matter. The first step should be to foster a sense of curiosity. Isn’t that how we started? If there’s no curiosity, I don’t think it’s a good idea to teach kids to code before they ask. Honestly, I didn’t want my dad to teach me his trade when I was a kid.

  • Configuration Updates With Emacs 29

    As I mentioned in my last post, I started using VSCode to see what I was missing, and I decided it was a good time to take all the risks and break my Emacs configuration. In my last round of changes, I tried built-in alternatives to packages, such as eglot instead of lsp-mode. Now I’ve decided to update Emacs to version 29.

    When I first switched to Emacs 28, all my performance problems were solved thanks to native compilation support. Most of these problems were caused by LSP1, and it was very annoying to wait even half a second to see syntax errors in the code. Now I leave Emacs running on the server for months without closing it. That was the change in Emacs 28 that I can’t forget.

    Now, if you ask what the equally important update in Emacs 29 is, I would claim that Emacs has become quite useful even if you don’t install any extensions. Many extensions I used are now part of the project, for example:

    • Modus-Vivendi and Modus-Operandi, the themes I have been using for years and cannot give up, now come built in.
    • The package configuration tool use-package is now included.
    • The built-in LSP client eglot is now available; I previously used the lsp-mode extension.
    • Tree-sitter support no longer requires installing anything.
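    As a rough sketch of what this means in practice, an init.el relying only on these built-ins might look like the following. This is a minimal illustration under my own assumptions, not my actual configuration; the hook and mode names are the standard Emacs 29 ones:

```elisp
;; Theme: modus-vivendi ships with Emacs, no package install needed
(load-theme 'modus-vivendi t)

;; use-package is part of Emacs as of version 29
(use-package eglot
  :hook (python-mode . eglot-ensure))  ; start the built-in LSP client automatically

;; Built-in tree-sitter integration: remap classic modes to their *-ts-mode variants
(setq major-mode-remap-alist
      '((python-mode . python-ts-mode)
        (js-mode . js-ts-mode)))
```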

    Furthermore, TRAMP2, which works much like VSCode’s remote development extensions, has started to support Docker and Kubernetes; however, I have not yet verified whether LSP can be used in such a remote development environment. Until it can, I prefer to connect to the server with MOSH3 and run Emacs remotely.

    As many extensions became built-in, I also reviewed the other extensions I use. Although it is possible to use Emacs without any installed extensions, it is very difficult for me to give up some of them. For instance, magit is far superior to many competitors, both as an Emacs package and as a Git client.

    Another extension I can’t give up is doom-modeline. The default mode-line in Emacs is quite inefficient: it wastes space, uses the right half of the line poorly, and shows minor modes needlessly. I began using doom-modeline to address all of these issues. You can see how it looks in the screenshot:

    I suppose the major change I made with Emacs 29 was to stop using popups for text completion. I’m not sure yet whether it’s a good idea, but I’ll keep trying. With the following extensions, I can use all of Emacs’s minibuffer-based capabilities far more effectively:

    • consult: an extension for minibuffer completion.
    • vertico: a minibuffer completion UI that pairs well with consult.
    • orderless: completion style with various algorithms to find more search results.
    • marginalia: an extension to see more information about the search results.
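    To show how little wiring this stack needs, here is a rough sketch of the relevant configuration. It assumes all four packages are installed from ELPA/MELPA, and the keybinding is my own choice, not a default:

```elisp
;; Minibuffer completion stack
(vertico-mode 1)         ; vertical completion UI in the minibuffer
(marginalia-mode 1)      ; annotate candidates with extra information
(setq completion-styles '(orderless basic))  ; flexible, out-of-order matching
;; consult commands are ordinary interactive commands; bind the ones you use
(global-set-key (kbd "C-s") #'consult-line)
```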

    When I started using the minibuffer instead of popups for completion, there was a lot of screen space left over for a second kind of text completion, so I thought: why not use Copilot or Codeium? Look at the screenshot to see how it works:

    I didn’t turn on many UI components, such as line numbers, tabs, the fringe, and scroll bars, so my configuration probably isn’t ideal for a graphical user interface. To be honest, I didn’t need them; perhaps I’ll keep the UI in zen mode and continue using Emacs in my preferred terminal program, WezTerm. For the time being, though, I intend to keep working with VSCode as well. You can find the new version of Emacs with my setup in my emacs.d repository if you want to try it out.


    1. LSP: Language Server Protocol ↩︎
    2. TRAMP: Transparent Remote Access, Multiple Protocol ↩︎
    3. MOSH: Mobile Shell ↩︎
  • Status Update on Emacs

    I’ve used Emacs for years, and it’s not easy to get rid of this archaic (what?) editor. Although it helped me do my job better, my setup was never as stable as the editor itself. When I started working as a web developer, I installed all the project requirements on the local machine, not even using virtual environments. Emacs was good enough to make development possible.

    Then I started using tools like Vagrant, Docker, and virtual environments, which made projects easy to install but Emacs harder to use, because I had to configure the editor to find the right Python interpreter, Node version, and so on. The Language Server Protocol (LSP) was a great help, but I still had to create a virtual environment for each project.

    Then I switched to remote development; I had a remote server to access the same development environment from all my devices. I was using a good terminal emulator like WezTerm, connecting to the server using MOSH or Eternal Terminal, and running Emacs on the server remotely. This has never been as comfortable as working locally.

    In each case where my setup has changed, I have updated Emacs and reconfigured it for the new environment. If you take a look at my Emacs configuration, you will see that I use vanilla Emacs with a minimal configuration and try to rely on built-in packages as much as possible. But now I’m at a new crossroads, because I have started using AI tools actively and am getting bored of waiting for those tools to support my editor. Take GitHub Copilot: they don’t even have a plan to support Emacs officially. You could say it’s a tactic to push people toward their own editors, or that they don’t have time to support every editor; maybe it’s better to let the community take the initiative in building an extension. Either way, it’s not fair or helpful for maintaining the diversity of editors.

    It’s probably not easy to change my habits quickly, but I plan to use VSCode for a while and see what I’m missing. I know VSCode has good remote development tools and official GitHub Copilot support. During this period, I will update Emacs to match the features I use in VSCode and then try it again.

    For now, I want to compare the two editors for the extensions I’m using in Emacs. These extensions are mostly not needed on VSCode because they are built-in features:

    • company, company-prescient: autocomplete extension.
    • ctrlf: better search and replace.
    • deadgrep: ripgrep integration for Emacs. I’m using the built-in search and find commands on VSCode.
    • diff-hl: shows git diff in the fringe.
    • eglot: LSP client.
    • find-file-in-project: Helpful for finding a file in a project.
    • multiple-cursors: An extension for editing multiple lines at once.
    • expand-region: Not easy to do this on VSCode (CTRL+CMD+SHIFT+Left what the hell?) but yes it’s a built-in feature.
    • magit: A Git client for Emacs. I don’t think I can find an equivalent in any editor. Don’t suggest GitLens, please.
    • minions: Hides minor modes on Emacs mode line. Not necessary on VSCode.
    • modus-themes: My favorite theme for Emacs. I’m using the GitHub theme on VSCode.
    • puni: Structured editing tool for Emacs. Not sure if I can find an equivalent on VSCode.
    • vertico, vertico-prescient: Similar to CTRL|CMD+SHIFT+P or CTRL|CMD+P on VSCode.
    • tree-sitter, tree-sitter-langs: Not sure if VSCode needs an equivalent, but it’s a great syntax parsing library for Emacs.
    • unfill: For unfilling paragraphs with a shortcut.

    My Emacs configuration is available on GitHub.

  • Return Back to Blog

    Hey, Twitter is not like in the good old days, I’m switching to Mastodon.

    No, no, I won’t say that. I already tried it, and I didn’t succeed, because I was alone and my friends stayed on Twitter. Now, after the latest news about Twitter layoffs, I see a similar migration again, but I don’t think it will succeed either. The main thing I want is to be a platform-independent content creator: if someone wants to reach me, I want them to visit my website; and if someone has negative thoughts about me… maybe they can write me an email, or really, I don’t care.

    I had a plan in mind:

    1. Periodically delete unnecessary tweets on Twitter (98% of them qualify)

    I wrote a small application in Python because I didn’t want to give any third-party software access to my account. Feel free to look at the project if you’re curious; it’s called meep. When I had time, I filtered and reviewed my tweets by year and keyword, deleted the unnecessary ones, and looked for inspiration for new blog posts.
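    The real tool is more involved, but the filter-by-year-and-keyword idea can be sketched like this. The data shape and function name here are hypothetical illustrations, not meep’s actual API:

```python
from datetime import datetime

def filter_tweets(tweets, year=None, keyword=None):
    """Return tweets matching an optional year and keyword (case-insensitive)."""
    result = []
    for tweet in tweets:
        created = datetime.fromisoformat(tweet["created_at"])
        if year is not None and created.year != year:
            continue  # wrong year: skip
        if keyword is not None and keyword.lower() not in tweet["text"].lower():
            continue  # keyword not present: skip
        result.append(tweet)
    return result

tweets = [
    {"text": "New blog post about Emacs", "created_at": "2022-11-05T10:00:00"},
    {"text": "lunch", "created_at": "2021-03-01T12:30:00"},
]
keep = filter_tweets(tweets, year=2022, keyword="emacs")
# only the first tweet matches both filters
```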

    2. Create accounts on several alternative social media servers

    I will use my website for microblogging and share my posts on multiple social accounts like Mastodon, Cohost, etc.

    3. Automate the synchronization in one direction

    Using applications such as IFTTT and Zapier, I have automatically shared my new blog posts to social media via RSS.

    After that, I no longer care whether Twitter shuts down or a Mastodon server crashes. I have the data, and it is always visible on my website. I recommend this approach.

  • Remote Development Environment

    When I started using Windows in 2014, I kept my development environment on Linux. I did this first with Vagrant; then I used WSL, Docker, and WSL2. Ever since, an idea had been forming in my head: I’m already using two different systems on one computer at the same time; could I use a thin client for one and connect to the other remotely?

    PURPOSE

    Let me first start by explaining why I want to do this:

    • I need a powerful computer to run two systems at the same time; but I would rather carry a lighter computer with a better screen and longer battery life than a more powerful one.
    • I already have a computer provided by the company I work for, and I don’t want to carry two computers at once. Nor do I want to keep my personal files and hobby projects on the workstation.
    • Sometimes I want to connect to my development environment from another device, such as a tablet or phone. For that, I would have to keep my computer turned on all the time.

    Now let’s look at where I can keep the development environment and investigate the other alternatives.

    PLAN

    Creating a virtual server on a VPS service like DigitalOcean or AWS Lightsail was my favourite option; however, once Docker pushed up the RAM requirements, the cost of the VPS could buy a Mac mini every two years. So I eliminated this option.

    My first thought was to build a portable PC from a Raspberry Pi 4. Although I didn’t have much experience in this area, I thought I could do it; but as the system requirements grew, there were quite a few problems, and it drifted away from what I had imagined. For example, when I added an SSD there was a heat problem, and I had to add a heatsink to solve it. It also needed an adapter for power, and it wasn’t easy to find a good case to match.

    I later looked for ready-made solutions too, but couldn’t find a product I liked. In the end, when the performance for the cost didn’t satisfy me, I turned to other alternatives.

    There’s no advantage in having a laptop for the development environment, but I wanted the device to be portable because I occasionally travel. I thought about keeping the server machine in the same place all the time, but I wanted to be able to reach it physically in an emergency such as a system error or power outage.

    I also looked at other alternatives such as the Intel NUC and Beelink; I was put off by their brick-like power adapters, so I preferred to buy a Mac mini, whose case already includes the power supply. I still don’t understand why PCs can’t fix this adapter issue.

    CONNECTION

    After buying the machine that would host my development environment and setting it up at home, I made the first updates. First, I reviewed the power settings to prevent the device from going into sleep mode.

    Then I chose the programs I would use to connect to it. I’ll explain later why I use multiple methods and tools:

    1. TAILSCALE: The first thing I need in order to connect to the server is a fixed address for it. I use Tailscale, which is built on WireGuard, for that. For example, when I name my server gerudo, I can connect to it simply with ssh gerudo. The client device must also be registered in my Tailscale account so it can resolve the address gerudo points to.
    2. SSH & MOSH: The applications I use most in the terminal are SSH and Mosh. Mosh gives a more stable connection than plain SSH, but you need to configure your terminal to support 24-bit colours.
    3. REALVNC: macOS doesn’t need a separate VNC server installed, and you can see the machine’s address when you log in with your RealVNC account. If you are using Tailscale, you can also connect with any VNC client via vnc://machine-name. I only use it when necessary.
    4. DUET DISPLAY: It helps me to use my iPad as a primary or secondary display.
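    Since Tailscale’s MagicDNS resolves the machine name directly, the SSH side needs almost no configuration; still, a small ~/.ssh/config entry helps keep sessions alive over flaky links. The host and user names below are only examples, not my real setup:

```
Host gerudo
    # name resolved by Tailscale's MagicDNS
    User dev
    # keep the connection alive across brief network drops
    ServerAliveInterval 60
    ServerAliveCountMax 3
```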

    The server side of the development environment is now done; we can leave that part at home and forget about it. Now for the client part, which we will always carry. Decide according to your own situation; I’ll explain my preferences:

    1. WORKSTATION: I can access my editor via SSH or MOSH from the computer provided by the company, and I can access the desktop with any VNC client.
    2. IPAD: For me, it’s better to use a tablet than to carry two laptops. I use the iPad to access my private files and as a second display for the workstation. Sometimes I leave the workstation behind and connect to my server from the iPad using Blink.
    3. IPHONE: Although this isn’t my preferred method, it’s cool to access the graphical interface and do small tasks on the server. I just use RealVNC.

    As I said before, I use Tailscale on all devices. It may not be needed on the phone, because RealVNC already knows the machine’s address via the user account.

    Now let me explain why I use multiple ways of connecting to the server. Don’t think this whole setup works smoothly; it’s not as easy as it seems, and there are at least a few critical problems I’ve run into.

    PROBLEMS

    The first problem is that you need a display to set up your login account on the Mac mini for the first time. If I had been able to use a Raspberry Pi 4 as the server, it wouldn’t have had this problem; but the Mac mini, unfortunately, does.

    The second problem is making the first internet connection when you move the server somewhere new. If you use an iPhone, you don’t need to worry, because connection info is shared between the devices. But what if you don’t have an iPhone or iPad? Then, yes, you need a display again.

    Another problem is with Duet Display. If the server restarts and you get logged out somehow, you need to log in again to use Duet. Automatic login would solve this, but it creates a security weakness and isn’t a reasonable option. So instead I connect with a VNC client, log in once, and then continue with Duet Display.

    Let’s talk about work-related issues a bit. I’m neither a frontend nor a mobile developer, so I can work with just a terminal. Sometimes I dabble in game development, and I plan to use Duet Display for that. If I were a frontend developer, I think I could use my workstation as a client, keep the code on my server, and debug from the browser on the workstation. On the other hand, I don’t think any of this would work for mobile development. If you have suggestions there, I’d love to hear them.

    FINALLY

    Given how often power and internet cuts happen in Turkey, adding a UPS (uninterruptible power supply) to this kind of remote development environment is necessary. Beyond that, you can get paranoid and consider building a mini robot you control over Wi-Fi to press the computer’s power button.

  • Pair Programming

    Pair programming has been a known and practised method for a long time. I know it has more than one purpose and form, but I want to say a few things about an expert and a novice working together.

    Investigation

    For a long time I went to a dental clinic for braces treatment, and after the treatment was over I had surgery twice (no worries, I’m fine). Sometimes there was more than one doctor at my appointments. One was relatively more experienced and would generally instruct the less experienced one to do what they themselves would have done. The novice followed the instructions without asking questions, because there was an actual patient present, and tried to imitate the expert as closely as possible. After a while, the novice was handling appointments alone.

    Another thing I observed was that a novice doctor wasn’t always paired with the same expert. I think that is an essential detail, because it shows that the experts also went through similar processes.

    Pair Programming

    On the projects I work on, we do code reviews, we write documentation, and we update it as new questions come in from novices. I also make a point of telling novices that they can ask me any question and that I’ll answer when I’m available; but none of this solves the main problem. So I realised the importance of pair programming too late.

    To elaborate on the main problem, let’s stay with the same metaphor. Imagine a novice doctor handling your appointment with no one to help them. They can’t make the patient wait while they ask an expert a question, and they can’t record the operation so a more skilled doctor can review their technique later. The patient comes to the appointment, the doctor does what needs to be done, and the patient leaves when the work is complete.

    To speed up a novice’s adaptation to the team and save their teammates’ time, it may be a good idea to give them a critical task that directly affects production quality, while directing them verbally or in writing. The novice carries out the workflow a few times under an expert’s observation, and the expert lets them take a risk.

    Let’s move on to the second important issue, the case of experts and novices randomly matched every time. There are a few key benefits this provides:

    1. The novice receives training from multiple experts instead of one. If there is any inconsistency in what the experts say, the novice will object and alert an expert.
    2. An expert will resolve the inconsistency, talking with the other experts if necessary, and communicate the final decision.
    3. Novices meet multiple experts, and experts work with multiple novices. Thus coworkers build relationships with one another through their positions, not personally.

    I find the benefit in the last item very important. It is crucial that projects are independent of particular people, and that positions are stable and can be filled by different employees. If a developer can leave a project quickly, the project is independent of individuals, and we can count that as a success of the project’s developers. It is also good for the developer, because they will hit fewer snags when changing their team or project.

    The Cost of Making Busy the Expert

    In pair programming, the time spent by two people doing one job appears to double. However, calculating person-hours with such straightforward logic can be misleading: this cost is bearable in the long run, because it reduces the time it takes a novice to get used to the job and take on responsibility.

    The key here is to keep each expert busy for at most one or two hours a week, ensure that every expert takes responsibility, and share the tasks fairly.

    The second way to reduce the cost is to pick the pair-programming task from the expert’s plate, not the novice’s. In other words, the expert hands one of their own tasks to the novice and observes whether the novice completes it just as well. The only cost to the expert is the time lost by delegating the job instead of doing it quickly themselves. But, as I said before, this cost is bearable in the long run.

    Resources

    As I researched this subject, I realised it was a little more complicated than I had anticipated. So, rather than prolonging this article, I’d like to point the reader to some resources.

  • Three Questions to a CTO

    I’m no longer the CTO of Radity, but I trust it will achieve even more in the future. One of my old coworkers asked me some questions to understand what I did in this position. To remember it later, I want to share the questions and my answers here.

    What are the most critical three needs and goals of a CTO?

    • To ensure that the IT organization does business uninterruptedly and efficiently.
    • To follow IT news and integrate new tools whenever something can ease the operation.
    • To keep the company’s know-how safe and up to date, and to help new coworkers adapt to their teams quickly.

    In what topic did you have the most problems?

    My biggest problem is reaching the right people. I always have to keep my network up to date for that.

    What kinds of tasks do you have?

    I have to work continuously toward the goals I mentioned in the first question. At the same time, I’m always talking with my coworkers to discover their potential and place them in the right projects or tasks. I love sharing my ideas with them; I listen to their thoughts on my decisions and consider their ideas whenever I decide something new that affects the culture and future of the company.

  • How to Fix Default Fonts Problem on Firefox

    Most websites list Helvetica as a fallback font, and Ubuntu substitutes Nimbus Sans for it even if you have a Helvetica-compatible font installed. Firefox’s default font is DejaVu Sans, but the defaults only apply when a website’s styles don’t specify a font family.

    Nimbus Sans comes with the fonts-urw-base35 package, which is a direct or indirect dependency of ubuntu-desktop. I need ubuntu-desktop for GNOME, so the best solution here is to reject the font entirely and refresh the font cache.

    Let’s go step by step. First, if you don’t have it yet, install the ttf-mscorefonts-installer package; Arial, the best substitute for Helvetica, comes with it. Then confirm that the system currently resolves Helvetica to Nimbus Sans:

    $ fc-match "Helvetica"
    NimbusSans-Regular.otf: "Nimbus Sans" "Regular"

    OK, so we understood the problem correctly. We will reject that weird font and recheck the output until fc-match finds Arial. To do that, we need to create a fonts.conf file in the right place. On my Ubuntu (20.04) the path is ~/.config/fontconfig/fonts.conf; if you’re using a different distribution, look up the correct path for it.

    After you create the file, fill it with these lines (fontconfig expects the rules wrapped in a <fontconfig> root element, and each match inside a <pattern>):

    <?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <fontconfig>
      <selectfont>
        <rejectfont>
          <pattern>
            <patelt name="family">
              <string>Nimbus Sans</string>
            </patelt>
          </pattern>
        </rejectfont>
      </selectfont>
    </fontconfig>

    Then, let’s clear the fonts cache and recheck the fc-match output:

    $ fc-cache -rv
    $ fc-match "Helvetica"
    n019003l.pfb: "Nimbus Sans L" "Regular"

    Okay, yet another weird font. Let’s reject this one too and try again:

    <?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <fontconfig>
      <selectfont>
        <rejectfont>
          <pattern>
            <patelt name="family">
              <string>Nimbus Sans</string>
            </patelt>
          </pattern>
          <pattern>
            <patelt name="family">
              <string>Nimbus Sans L</string>
            </patelt>
          </pattern>
        </rejectfont>
      </selectfont>
    </fontconfig>
    $ fc-cache -rv
    $ fc-match "Helvetica"
    Arial.ttf: "Arial" "Regular"

    Finally! Now we can restart Firefox and check the fonts in the browser:

    That’s all.