I have been using Linux as my daily driver—both professionally and personally—for many years. My fascination with kernel internals has led me to build the kernel myself and boot it inside a virtual machine (I use QEMU). I’m planning another blog post about kernel compilation, but for now I’ll focus on how to create a bootable ISO image for Linux.
In this article of the series, we will see how to run models with LibTorch and use them for inference. In the first article of the series, while discussing usage scenarios, I mentioned that a model trained in Python can be used for inference in C++, which lets you overcome certain bottlenecks. Now we will train a model in Python (to save time, I will use a pre-trained model), save it, load it in C++, and run inference.
In this article of the series, I will explain how tensors are created, accessed, and modified in LibTorch. If you haven't read the introductory article where I explained what LibTorch is and what it can do, I recommend starting there. Keep in mind that this article could have been much longer; it has been trimmed to stay as simple as possible while still covering the essentials, and the main reference is the official documentation itself.
Many of us have come across forum debates about which is the best language for Machine Learning or its popular subfield Deep Learning (discussions about the best ML library usually follow close behind). If you haven't run into one yet, don't worry, you will soon. In my opinion, the most accurate answer to this question is "it depends on the situation". Elon Musk's tweet of February 2, 2020, was a good illustration of this.