I was sorely remiss not to blog more during the Plasma 5.9 dev cycle. While 5.9 packs a fair amount of nice new features (e.g. here's the widget gallery in Application Dashboard at some point during development), there was not a peep about them on this blog. Let me do better and start early this time! (With 5.9 out today ...)
Folder View: Spring-loading

Folder View in Plasma 5.10 will allow you to navigate folders by hovering above them during drag and drop. This is supported in all three modes (desktop layout, desktop widget, panel widget), and pretty damn convenient. It's a well-known feature from Dolphin, of course, and now also supported in Plasma's other major file browsing interface.
Folder View packs a lot of functionality - at some point I should write a tips & tricks blog on some of the lesser known features and how they can improve your workflow.
Performance work in Folder View ... and elsewhere!
But that's not all! I've also been busy performance-auditing the Folder View codebase lately, and was able to realize substantial savings. Expect massively faster performance scrolling big folders in big Folder View widgets, lower latencies when navigating folders, and greatly improved Plasma startup time when using Folder View widgets on the desktop. In the case of a big folder in a big widget, a 5.10 Folder View will also use quite a bit less memory.
I've done similar analyses of other applets, e.g. the Task Manager and the Pager, making smaller improvements and looking into more fundamental Qt-level issues that need addressing to speed up our UIs further.
Others on the Plasma team have been up to similar work, with many performance improvements - from small to quite large - on their way into our libraries. They improve startup time as well as various latencies when putting bits of UI on the screen.
While it's still very early in the 5.10 cycle, and it won't be shy on features by the end, performance optimization is already emerging as a major theme for that upcoming release. That's likely a sign of Plasma 5's continuing maturation - we're now starting to get around to thoroughly tuning the things we've built and rely on.
Following the recent addition of easy DBus service snapping in the snap binary bundle format, I am happy to say that we now have some of our KDE Applications in the Ubuntu 16.04 Snap Store.

To use them you need to first manually install the kde-frameworks-5 snap. Once you have it installed you can install the applications. Currently we have available:
The Ubuntu 16.04 software center comes with Snap store support built in, so you can simply search for the application and should find a snap version for installation. As we are still working on stabilizing Snap support in Plasma’s Discover, for now, you have to resort to a terminal to test the snaps on KDE neon.
To get started using the command line interface of snap you can do the following:
sudo snap install kde-frameworks-5
sudo snap install kblocks
All currently available snaps are auto-generated. For some technical background check out my earlier blog post on snapping KDE applications. In the near future I hope to get manually maintained snaps built automatically as well. From-git delivery to the edge channel is also still very much a desired feature. Stay tuned.
A couple of years ago during a hackathon, a few of us wrote a Qt wrapper around PDFium, the open-source PDF rendering engine used for viewing PDFs in Chromium. There have been a few fixes and improvements since then. Now we have (finally) made this module available under the LGPLv3 license.
QPdfDocument can render a PDF page to a QImage, and there is QPdfBookmarkModel which can be used for a bookmarks view. There is an example application: a widget-based PDF viewer.
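To make the API shape a bit more concrete, here is a minimal sketch of rendering the first page of a document to a QImage. It follows the QPdfDocument API as it later appeared in the QtPdf module; names in the qt-labs snapshot may differ slightly, so treat this as an assumption rather than a definitive example.

#include <QImage>
#include <QPdfDocument>
#include <QSize>
#include <QString>

// Load a PDF and render its first page into an 800x1000 pixel image.
QImage renderFirstPage(const QString &path)
{
    QPdfDocument doc;
    if (doc.load(path) != QPdfDocument::NoError) // load() reports a DocumentError
        return QImage();
    return doc.render(0, QSize(800, 1000));      // page index, target size in pixels
}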
If you want to try it out, here’s how to get started:
git clone git://code.qt.io/qt-labs/qtpdf
cd qtpdf
git submodule update --init --recursive
qmake
make
cd examples/pdf/pdfviewer
qmake
make
./pdfviewer /path/to/my/file.pdf

There is some work in progress to add support for PDF in QtQuick, to treat it as an ordinary image format and to add support for multi-page image formats in general, but we don’t have any particular target release date yet.
The post New QtLabs PDF module appeared first on Qt Blog.
Although KDE Blog is a blog usually written by yours truly alone, it has always been open to collaborations. Such is the case with this article by returning guest writer Edith Gómez, editor at Gananci, passionate about digital marketing and specialized in online communication, who presents "Recommendations for migrating to Linux in your company".
Linux, the best known of the free operating systems, is not reserved for one exclusive kind of user. Large companies as well as micro, small and medium-sized businesses can also use this operating system. With that in mind, it is important to consider a series of basic criteria in order to decide on a proper deployment of Linux in the company.
But first, what is Linux? Linux is a free operating system, which means its source code is available. Linux is freely modifiable and free of charge. To be completely precise, Linux is in fact the kernel around which several operating systems (also called GNU/Linux distributions) have been built.
As you have probably noticed, the first advantage over paid operating systems is that Linux is completely free of charge, so the organization will not have to worry about spending on software licenses.
Two kinds of Linux distributions can be found: on one hand, those aimed at companies, such as Red Hat and Novell (SUSE Enterprise), which do carry a license cost, not for the development of the software but for support and maintenance services; and on the other hand, distributions without license costs, of the open kind, such as Ubuntu, openSUSE, Fedora, etc.
Here are some tips to make the company's migration to Linux as smooth as possible:

We must stress that not every company can migrate to Linux, above all because of the company's essential programs and the hardware they may require. However, if these points can be resolved, migrating to Linux distros can be viable.
It is also worth highlighting the license costs of enterprise distributions, which vary depending on the type of server and a number of other factors. To pin down these prices, it is best to contact the vendor of the distribution in question, whether Red Hat or Novell.

Quite a while ago already I wrote a launcher menu widget named Simple Menu. It's using the same backend I wrote for our bundled launchers, and it's a little bit like Application Dashboard scaled down into a small floating window, plus nifty horizontal pagination. It's also really simple and fast.
While some distributions packaged it (e.g. Netrunner Linux), it's never been properly released and promoted - until now! Starting today, you can find Simple Menu on the KDE Store and install it via Add Widgets... -> Get new widgets in your Plasma.
Please note that Simple Menu requires Plasma v5.9 (to be released tomorrow). Actual v5.9, not the v5.9 Beta - it relies on fixes made after the Beta release.
Note that throughout this blog post, the term “Apple Platforms” will be used to refer to all four Apple operating systems (macOS, iOS, tvOS, and watchOS) as a whole.
With the recent release of Qt 5.8, I thought it would be a good time to share some of the highlights of what's new in Qt on Apple Platforms.
To start off, Qt now has Technology Preview level support for Apple TV and Apple Watch devices beginning in Qt 5.8. This completes our offering to support all four Apple Platforms – macOS, iOS, tvOS, and watchOS.
Qt for tvOS is 95% similar to iOS (the UI framework there is UIKit as well) and as a result contains roughly the same feature set and supported modules as iOS, with the notable exception of QtWebEngine.
The major difference between the two platforms is input handling. iOS has a touch-based model that maps fairly well to a traditional point and click desktop interface from a programming perspective, while tvOS has a focus based model. This means that you don’t have a canvas which can receive input at a particular point (x,y coordinate), but rather an abstract canvas in which you “move” the focus to a particular object. When an object has focus, you can use the Apple TV Remote to trigger an action by clicking its trackpad or pressing the Menu or Play/Pause buttons.
Qt supports some aspects of the tvOS input model (scrolling and clicking are recognized), but there will be some degree of manual work handling input events in QML to build a tvOS app. Most basic navigation actions on the Apple TV Remote are exposed as keyboard events in Qt, and gesture recognizers can be used for more complex multitouch input. We are still exploring ways to provide mechanisms that make it easier to work with this new input model.
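To give a feel for what that manual input handling can look like on the C++ side, here is a small, hedged sketch of reacting to remote key events in a window subclass. The specific Qt::Key values used below for the trackpad click and the Menu button are assumptions for illustration; check the tvOS platform notes for the mapping Qt actually applies.

#include <QKeyEvent>
#include <QWindow>

class TvWindow : public QWindow
{
protected:
    void keyPressEvent(QKeyEvent *event) override
    {
        switch (event->key()) {
        case Qt::Key_Select: // trackpad click (assumed mapping)
            activateFocusedItem();
            break;
        case Qt::Key_Menu:   // Menu button (assumed mapping)
            goBack();
            break;
        default:
            QWindow::keyPressEvent(event);
        }
    }

private:
    void activateFocusedItem() { /* trigger the currently focused element */ }
    void goBack() { /* navigate one level up */ }
};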
We’d also like to thank Mike Krus doing the initial work of porting Qt to tvOS, and who is now the platform’s maintainer.
Qt for watchOS is also heavily based on iOS. However, it is not possible to run QML or any other Qt based UI on the watch using public APIs because the primary interface API is WatchKit (as opposed to UIKit). Therefore, Qt can currently only be used for non-UI tasks such as networking, audio, backend graphics processing, etc. If the available watchOS APIs change in the future then we can certainly explore the possibilities at that point.
I am also the watchOS platform maintainer, so please feel free to send me any questions or feedback.
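As a rough illustration of the kind of non-UI work Qt can still do there, the sketch below fires off a plain network request from C++ backend code; the URL and the surrounding setup are hypothetical, and the watch-facing UI would still be written with WatchKit.

#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QString>
#include <QUrl>
#include <QtDebug>

// Issue an asynchronous GET request and report how many bytes came back.
void fetchStatus(QNetworkAccessManager &nam)
{
    const QUrl url(QStringLiteral("https://example.com/status")); // hypothetical endpoint
    QNetworkReply *reply = nam.get(QNetworkRequest(url));
    QObject::connect(reply, &QNetworkReply::finished, [reply] {
        qDebug() << "Fetched" << reply->bytesAvailable() << "bytes";
        reply->deleteLater();
    });
}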
As many iOS developers are no doubt aware, only static libraries were permitted for most of iOS's existence: Apple's App Store requirements prohibited shared libraries even though the underlying operating system had supported them since 1.0. Finally, with the release of iOS 8 and Xcode 6 in 2014, shared libraries/frameworks became officially supported by Apple for use in applications submitted to the App Store. Unfortunately, this feature did not make it in time for the Qt 5.8 release… but Qt 5.9 will support it (also for tvOS and watchOS)!
To build Qt as shared libraries on iOS, all you need to do is pass the -shared option to configure (and this will be the default in the next release). Note that shared libraries require a deployment target of at least iOS 8.0, which will also be the minimum requirement for Qt 5.9.
With shared libraries, you can simplify your development workflow by not having to maintain separate code paths for static libraries on iOS and shared libraries on other platforms. Shared libraries also allow you to:
Because the question will inevitably come up: we do not believe shared libraries will have any practical effect for LGPL users. There are other concerns that come into play with the Apple App Store (namely, DRM) which render the point moot. However, we recommend using shared libraries for the technical reasons outlined above. And as always, seek professional legal advice with regard to software licensing matters.
In the past several Qt releases we have also made a number of significant enhancements to the build system on Apple Platforms, including:
One more thing… we’ve been changing every reference to “Mac OS X” and “OS X” that we can find into “macOS”.
The post What’s New in Qt on Apple Platforms appeared first on Qt Blog.
Nowadays, when I think of a distribution for a user with very little experience in the GNU/Linux world, Linux Mint always comes to mind. That is why I am pleased to note that Linux Mint 18.1 KDE Edition, the first major update of this series, has been released.
On January 27th the Linux Mint development team announced the release of Linux Mint 18.1 KDE Edition "Serena", a rather interesting update that practically brings this distribution, aimed at the least experienced users, fully up to date.
For those who don't know it, Linux Mint is an extremely user-oriented Linux distribution that stands out for its stability and robustness. For this reason its developers are very cautious when introducing new features.
That is why I am surprised but pleased that, although only a few months have passed since Linux Mint made the jump from KDE 4 to Plasma 5, the developers decided it was time to take another step and ship Plasma 5.8 in this first major update of Linux Mint 18.
On reflection this is not so strange, since Plasma 5.8 is an LTS release and has received all kinds of praise. Linux Mint 18.1 KDE Edition "Serena" will thus remain an LTS release, that is, with extended support until 2021, so we are looking at a distribution that will live on many computers for a long time.
In short, great news for fans of this distribution and of the KDE project, since even the newest users now have the chance to enjoy the latest in GNU/Linux desktops.
More information: Linux Mint | What's new in Linux Mint 18.1 KDE

The main new features of Linux Mint 18.1 KDE Edition are the following:
More information: Linux Mint
The Kubuntu team is happy to announce that the Alpha 2 of Kubuntu Zesty Zapus (17.04) is released today. With this Alpha 2 pre-release, you can see what we are trying out in preparation for 17.04, which we will be releasing in April.
NOTE: This is Alpha 2 Release. Kubuntu Alpha Releases are NOT recommended for:
* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable
Getting Kubuntu 17.04 Alpha 2
* To upgrade from 16.10, run do-release-upgrade from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB Drive
I’ve got a chance to share a part of my upcoming book here. It is an excerpt from the second chapter.
The main feature of all functional programming languages is that functions can be treated like ordinary values. They can be saved into variables, put into collections and structures, passed to other functions as arguments, and also returned from other functions as results.
Functions that take other functions as arguments, or that return new functions, are called higher-order functions. Higher-order functions are probably the most important concept in functional programming. As you might know, programs can be made more concise and efficient by describing what the program should do on a higher level, with the help of standard algorithms, instead of implementing everything by hand. Higher-order functions are indispensable for that. They allow us to define abstract behaviours and more complex control structures than those provided by the C++ programming language.
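As a quick illustration of that idea in C++, the sketch below passes a lambda to the standard count_if algorithm: the caller supplies the behaviour (the predicate), and the algorithm supplies the control structure. The function and its names here are mine, not taken from the book.

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Count the words that are at least minLength characters long.
// count_if is a higher-order function: it takes the predicate as an argument.
std::size_t countLongWords(const std::vector<std::string> &words, std::size_t minLength)
{
    return static_cast<std::size_t>(
        std::count_if(words.begin(), words.end(),
                      [minLength](const std::string &w) {
                          return w.size() >= minLength; // the behaviour we pass in
                      }));
}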
See the whole excerpt in this PDF
This week at work I published an article about the usage of C++ lambda expressions in Qt, hope you like it!
I usually don't write political blog posts, especially if they relate to a country of which I'm not a citizen and in which I don't live. While I definitely have very clear opinions and views, I want to stay neutral in this blog and only talk about the technology side of things.
It seems that the new US administration is in the process of shaking up a lot of traditions and regulations, while also redefining the relations between the USA and the rest of the world. Even though a lot of these changes are very relevant to a lot of people on this planet, I want to focus on three topics that directly affect IT, the free software world and especially my work at Nextcloud.
Some of you might remember the early days of the web in the 90s, when we had the 'crypto wars'. This was a time when the US government tried to limit access to strong cryptography, especially outside the US. The idea was that the US secret services should be able to crack and decrypt every encrypted communication that happens outside the US. For example, software like PGP was not allowed to be exported outside of the US, and browsers like Netscape were only allowed to use weak 40-bit SSL keys while the US version supported 128-bit keys.
After a while the US realized that this was a very stupid idea and allowed other countries to also use strong encryption. It seems that the new Attorney General likes the idea of Crypto Backdoors and this is now back on the table. This would obviously be very bad for internet security. The EFF has a good summary.
A lot of organisations and companies are concerned about storing sensitive data on servers and cloud services hosted in the USA. The reason is that US government organisations are allowed to access all the information and data, and this is something a lot of people and companies don't agree with. Microsoft and Deutsche Telekom have implemented a workaround, making it possible to get a Microsoft Office 365 subscription where the data is hosted in a hosting center in Germany. The current judicial interpretation is that this service is covered by local German law and not US law.
However, you can now read in the news that the US might soon have a different interpretation here. In the near future US agencies might have full access to services where US companies are involved, as in this case with Microsoft. More information can be found in this article on Politico.
Two days ago Trump signed an executive order which might kill the Privacy Shield agreement with the EU. This is the agreement which succeeded Safe Harbor and which basically allowed European based companies to use US based cloud services and still be compliant with EU law. If this agreement is annulled, then all data flow from the EU to US based cloud services becomes illegal. More information from TechCrunch.
All this happened in only the last few days. It is not completely clear yet what the long term impact will be and what else might happen next, but it is safe to say that the security of computer systems, the internet and our privacy is under heavier attack than ever before.
Free software developers, organisations, companies and everyone else who cares about security and privacy should act now. We need to develop and support technology that implements strong cryptography and is distributed and federated. It is becoming very clear that the heavy dependency on US based IT, Cloud and web-services is not good for the rest of the world. One of the main benefits of free software like Linux, KDE, GNOME, ownCloud and Nextcloud is that everyone can host and install it wherever they want, can audit the code to make sure that there are no backdoors, while also being able to adapt it and enhance it however they want.
These are interesting times and we, as software developers, are in a key position to make sure that all people will have access to data privacy tools and secure communication in the future.
Wireless systems will be the protagonists at the V Jornadas y Talleres Libres of the UNED in Vila-real, and with a double serving next February, since we will have a talk followed by a workshop. In this post I only present the second part: the wireless systems workshop.
Once again led by Héctor Tomás, a technical engineer in computer systems, we will put into practice everything learned in the talk, to be held at 5 pm, about the complicated and fascinating world of wireless communication.
In this way we will put into practice part of what we have learned about antennas, routers, frequencies, software and other interesting aspects of wireless communication, so that we can get more out of our devices.
The poster is shown below, and we would appreciate it getting the widest possible reach, so please share it on your social networks. I don't usually ask for this publicly, but this kind of talk is what gets people hooked on the world of Free Software.

The basic information about the talk is as follows:
Other useful information about the V Jornades Lliures is as follows:
The list of upcoming talks and workshops is as follows:
We thank the UNED of Vila-real for the facilities provided to hold the talks and workshops, as well as everyone involved in the jornadas for their willingness to help.
It is pretty easy to install openSUSE Linux on a MacBook as its operating system. However, there are some pitfalls which can cause trouble. This article gives some hints about a dual boot setup with OS X 10.10 and the (at the time of writing) current openSUSE Tumbleweed 20170104 (oS TW) on a MacBookPro from early 2015. A recent Linux kernel, as in TW, is advisable as it provides better hardware support.
The LiveCD can be downloaded from www.opensuse.org and written with the ImageWriter GUI to a USB stick of ~1GB. I chose the Live KDE one and it ran well on a first test. During boot, after the first sound and the display lighting up, hold the Option/alt key and wait for the disk selection icon. Put the USB key with Linux in a USB port, wait until the removable media icon appears and select it for booting. For me all went fine: the internal display, sound, touchpad and keyboard were detected and worked well.

After that test it was a good time to back up all data from the internal flash drive. I wrote a compressed disk image to a stick using the Unix dd command. With that image and the live media I was able to recover in case anything went wrong.

It is not easy to satisfy OS X with its journaled HFS and the introduced logical volume layout, which comes with a separate repair partition directly after the main OS partition. That combination is pretty fragile, but should not be touched. The rescue partition can be booted by keeping the command key + r pressed. External tools failed for me, so I booted into rescue mode and used OS X's diskutil, or rather its Disk Utility GUI counterpart. The tool allows splitting the disk into several partitions. The EFI and rescue partitions are hidden in the GUI. The newly created additional partitions can be formatted to exFAT and later be modified for the Linux installation. One additional HFS partition was created for sharing data between OS X and Linux with the comfortable Unix attributes. The well-known exFAT, used by many bigger USB sticks, is a possible option as well, but needs the exfat-kmp kernel module installed, which is not installed by default due to Microsoft's patent license policy for the file system. In order to write to HFS from Linux, any HFS partition must have its journal feature switched off. This can be done inside the OS X Disk Utility GUI by selecting the data partition, holding the alt key and searching the menu for the disable journaling entry.

After rebooting into the live media, I clicked on the Install icon on the desktop background and started openSUSE's YaST tool. Depending on the available space, it might be a good idea to disable the btrfs filesystem snapshot feature, as it can eat up lots of disk space during each update. Another pitfall is the boot stage: select the secure GrubEFI mode there, as Grub needs special handling for the required EFI boot process. That's it. Finish the install and you should be able to reboot into Linux with the alt key.
My MacBook unfortunately has a defect: its Boot Manager is very slow. Erasing and reinstalling OS X did not fix that issue. To work around it, I need to reset the NVRAM by pressing alt+cmd+r+p at boot start for around 14 seconds, until the display goes dark, and can then go fluently through the boot process. Without that extra step, the keyboard and mouse might not respond in Linux at all, except for the power button. A hot reboot from Linux works fine. OS X does a cold reboot and needs the extra sequence.
KDE's Plasma needs some configuration to run properly on a high resolution display. Additional monitors can be connected and easily configured with the kscreen SystemSettings module. Hibernation works fine. Currently the notebook's SD slot is ignored, and the FaceTime camera has no ready openSUSE packages. Battery run time can be extended by spartan power consumption (less brightness, fewer USB devices and pulseaudio -k; check with powertop), but is not too far from OS X anyway.
While working on the Qt Visual Studio tools, I had to think about how to locally perform and test the update process for extensions. As already known to most Visual Studio users, the IDE provides a way to setup your own private extension gallery. To do so, one has to open the Visual Studio Tools | Options Dialog and create a new Additional Extension Gallery under Environment | Extensions and Updates.
My initial try was to simply put the generated VSIX into a folder on disk and point the URL field to this folder. After looking at the Tools | Extensions and Updates dialog, no update was highlighted for the tools. So this was not as easy as I had thought; some configuration file probably needed to be provided. A short search on the Internet led to the following article – there we go, it turned out that the configuration file is based on the Atom feed format.
Visual Studio uses an extended version of the Atom feed format consisting of two parts, the Header and the Entry. The header needs to be provided only once, while the entry can be repeated multiple times. Let's take a look at the header tags:
<title> The repository name.
<id> The unique id of the feed.
<updated> The date/time when the feed was updated.
Please note that even though these tags do not seem to be important for the desired functionality, it still makes sense to keep them.
The more interesting part is the Entry tag that describes the actual package you're going to share. There seem to be a lot of optional tags as well, so I'm just going to describe the most important ones. To force an update of your package, simply put a <Vsix> tag inside the entry tag. It must contain two other tags:
<id> The id of your VSIX.
<version> The version of the VSIX.
As soon as you increase the version number, even without rebuilding the VSIX, Visual Studio will present you with an update notice for your package. For a more advanced description of the possible tags, follow this link. If you are looking for the minimum Atom feed file to start with, you can grab a copy of ours here.
So now one might ask, why did I write all that stuff? Well, while I was preparing the next Qt Visual Studio Tools release I scratched my head quite a bit on this topic. And just maybe – it might save some other developer’s day while doing a Visual Studio extension release. Last but not least, I am happy to announce the availability of an updated version of the Qt Visual Studio Tools with the following improvements:
The Qt Company has prepared convenient installers for the Qt Visual Studio Tools 2.1 release. You can download them from your Qt Account or from download.qt.io. They are also available from the Visual Studio Gallery for Visual Studio 2013 and Visual Studio 2015.
For any issues you may find with the release, please submit a detailed bug report to bugreports.qt.io (after checking for duplicates). You are also welcome to join the discussions in the Qt Project mailing lists, development forums and to contribute to Qt.
The post Qt Visual Studio Tools Insights appeared first on Qt Blog.
Another interesting week has passed by. We held our first Gothenburg C++ meetup with a nice turnout. We met at the Pelagicore offices in Gothenburg (thanks for the fika) and decided on a format, the cadence and future topics for the group. If you want a primer in C++ and Qt in the next few months, make sure to join us! All the details are on the gbgcpp meetup page. For those of you not based in Gothenburg, there is a Sweden C++ group based in Stockholm.
Some other, more work related news: Pelagicore are changing offices in Gothenburg and it will be an Upgrade with a capital U! We’ve signed for a really nice office space just across the street from the opera and less than five minutes from the central train station and the Brunnsparken public transport hub of Gothenburg. And as always – we are looking for developers – hint, hint ;-)

Finally, do not forget that FOSDEM starts in a week. I'm going, so I'll see you in Brussels!
As many of you may know, QPainter has a multi-backend architecture and has two main paint engine implementations under the hood in Qt 5: the raster paint engine and the OpenGL2 paint engine, designed for OpenGL ES 2.0.
While in many ways the raster paint engine can be considered one of Qt's crown jewels, let's now talk about the other half: the GL paint engine that is used when opening a QPainter on an OpenGL paint device, for example a QOpenGLWindow or QOpenGLWidget.
What about modern OpenGL, though?
That is where the problems started appearing: around Qt 5.0, those who needed a core profile context for their custom OpenGL rendering often ended up running into roadblocks. Components like Qt Quick were initially unable to function with such contexts due to relying on deprecated/removed functionality (for example, client-side pointers), lacking a vertex array object, and supplying GLSL shaders of a version whose support is not mandated in such contexts.
In some cases opting for a compatibility profile was a viable workaround. However, Mac OS X / macOS famously lacks support for this: there the choice has been either OpenGL 2.1 or a 3.2+ core profile context. Attempts to work around this by, for instance, rendering into textures in one context and then using the texture in another context via resource sharing were often futile too, since some platforms tend to reject resource sharing between contexts of different versions/profiles.
Fortunately, during the lifetime of Qt 5 things have improved a lot: first Qt Quick, and then other, less user-facing GL-based components got fixed up to be able to function both with core and compatibility contexts.
As of Qt 5.8 there is one big exception: the GL paint engine for QPainter.
The good news is, this will soon no longer be the case. Thanks to a contribution started by Krita (see here for some interesting background information), QPainter is becoming able to function in core profile contexts as well. Functionality-wise this will not bring any changes; rendering still happens using the same techniques as before.
In addition to fixing up the original patch, we also integrated it with our internal QPainter regression testing system called Lancelot. This means that in addition to raster (with various QImage formats) and OpenGL 2, there will also be a run with a core profile context from now on, to ensure the output from QPainter does not regress between Qt releases.
All in all this means that a snippet like the following is now going to function as expected.
class Window : public QOpenGLWindow {
public:
    Window() {
        // The format has to be requested before the window is created,
        // so set it in the constructor rather than in initializeGL().
        QSurfaceFormat fmt;
        fmt.setVersion(4, 5); // or anything >= 3.2
        fmt.setProfile(QSurfaceFormat::CoreProfile);
        setFormat(fmt);
    }
    void paintGL() override {
        QPainter p(this);
        p.fillRect(10, 10, 50, 50, Qt::red);
        ...
    }
    ...
};
This is coming in Qt 5.9.
The patch has now been merged to qtbase in the ‘dev’ branch. This will soon branch out to ‘5.9’, which, as the name suggests, will provide the content for Qt 5.9. Those who are in urgent need of this can most likely apply the patch (see QTBUG-33535) on top of an earlier version – the number of conflicts are expected to be low (or even zero).
That’s all for now, have fun with QPainter in core profile contexts!
The post OpenGL Core Profile Context support in QPainter appeared first on Qt Blog.
For the past couple of months an elite team of KDE contributors has been working on a top-secret project. Today we finally announced it to the public.
The KDE Slimbook
Together with the Spanish laptop retailer Slimbook we created our very first KDE laptop. It looks super slick and sports an ever so sexy KDE Slimbook logo on the back of the screen. It will initially come with KDE neon as operating system.
Naturally, as one of the neon developers, I was doing some software work to help this along. Last year already we switched to a more reliable graphics driver. Our installer got a face-lift to make it more visually appealing. The installer gained an actually working OEM installation mode. A hardware integration feature was added to our package pool to make sure the KDE Slimbook works perfectly out of the box.
The device looks and feels awesome. Plasma's stellar look and feel complements it very well, making for a perfect overall experience.
I am super excited and can’t wait for more people to get their hands on it, so we get closer to a world in which everyone has control over their digital life and enjoys freedom and privacy, thanks to KDE.
The last post from my colleague Marc Mutz about deprecating Q_FOREACH caused quite an uproar amongst the Qt developers who follow this blog.
I personally feel that this was caused fundamentally by a perceived threat: there is a cost associated to porting away a codebase from a well-known construct (Q_FOREACH) to a new and yet-undiscovered construct (C++11’s range based for), especially when the advantages are not that clear. (With stronger advantages, maybe people would be more keen to move away).
A somewhat opposite argument can be applied to Qt itself, however. There is a cost to keeping Q_FOREACH around in Qt. For instance:
* it is slower than the C++11 range-based for, so it's not suitable for usage in a general purpose library such as Qt (remember: in a library, all the codepaths are hot codepaths for some user);
* developers need to learn the subtle differences between Q_FOREACH, Boost.Foreach and the range-based for.

This is not going to be another post about Q_FOREACH; this is a blog post about API deprecation in general. And, indeed: all of the above arguments apply to any API that is going through the deprecation process.
Any product needs to evolve if it wants to remain competitive. If the development bandwidth is finite, from time to time there is the need to drop some ballast. This does not necessarily mean dropping working functionality and leaving users in the cold. At least in Qt most of the time this actually means replacing working functionality with better working functionality.
This process has been going on in Qt since forever, but since Qt 5.0 we've started to formalize it in terms of documentation hints and macros in the source code. This was done with a precise purpose: we wanted Qt users to discover if they were using deprecated APIs, and if so, let them know what better alternatives were available.
And since the very release of Qt 5.0.0 we’ve officially deprecated hundreds of APIs: a quick grep in qtbase alone reveals over 230 hits (mostly functions, but also entire classes).
Here’s some good news: many developers are probably using deprecated APIs right now, and are not even noticing. Those APIs are working as expected, compile flawlessly, pass all the tests, and so on. Again, this has a reason: even though some API is deprecated, the Qt source compatibility promise holds. Our contract with our users is that we will keep all of our released APIs fully working for the entire Qt 5 major release, including the ones that have been deprecated.
Here’s some really bad news: many developers are probably using deprecated APIs right now, and are not even noticing! Apart from the possibility of not seeing those APIs any more in Qt 6, the real issue here is: developers are not using APIs that could make their code go faster, be more secure, or more flexible — that is, the APIs that are available right now to replace the deprecated ones.
There’s no excuse for deliberately maintaining a sub-standard codebase, so let’s get to work…
The big question is: how can Qt users figure out that they’re using deprecated APIs in the first place? Luckily for them, Qt has a solution. Every time an API gets deprecated, we tag it in the source code with a few macros. Let’s take a real-world example, from qtbase/src/corelib/tools/qalgorithms.h:
#if QT_DEPRECATED_SINCE(5, 2)
template <typename RandomAccessIterator>
QT_DEPRECATED_X("Use std::sort") inline void qSort(RandomAccessIterator start, RandomAccessIterator end) { ... }
#endif
There are two macros used here. The first one is QT_DEPRECATED_SINCE(major, minor), which conditionally expands to true or to false, depending on whether we want to enable or disable APIs deprecated up to (and including) Qt version major.minor. The parameters used in this example mean that the qSort function has been deprecated in Qt 5.2.
The second one is QT_DEPRECATED_X(text), which carries a text (there's also a version without an argument, called QT_DEPRECATED). This macro conditionally marks the declaration as deprecated, using a compiler-specific way; in standard C++14 this would correspond to the [[deprecated("text")]] attribute. The text argument represents a porting hint for the developer in case she gets a warning from the compiler; the warning would therefore suggest to use std::sort instead of qSort.
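For example, a hypothetical snippet that used the deprecated qSort would be ported to the suggested replacement like this (the function and container here are made up for illustration):

#include <QVector>
#include <algorithm>

void sortValues(QVector<int> &values)
{
    // qSort(values.begin(), values.end());   // deprecated since Qt 5.2, warns or fails to compile
    std::sort(values.begin(), values.end());  // the replacement suggested by the deprecation message
}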
As you may have guessed, that "conditionally" makes all the difference. Under which conditions do these macros trigger and let us know that we are using deprecated APIs? As we noted before, all of this machinery is actually disabled by default: user code compiles with no warnings even when using such APIs. The two macros work as follows:
QT_DEPRECATED_SINCE allows us to get compile-time errors if we use deprecated APIs.
QT_DEPRECATED_SINCE(major, minor) compares major, minor with the Qt version represented by the QT_DISABLE_DEPRECATED_BEFORE macro. This macro uses a version encoded as 0xMMmmpp (MM = major, mm = minor, pp = patch).
The actual comparison that happens is this:
// QT_VERSION_CHECK turns its arguments in the 0xMMmmpp form
#define QT_DEPRECATED_SINCE(major, minor) (QT_VERSION_CHECK(major, minor, 0) > QT_DISABLE_DEPRECATED_BEFORE)
If the comparison fails, the entire #if block guarded by a QT_DEPRECATED_SINCE gets discarded by the preprocessor. This means we will not get a declaration for a given name (like qSort, in our example), and therefore trying to use it in our code will trigger a compile error, just as wanted.
Unless we specify otherwise, QT_DISABLE_DEPRECATED_BEFORE is set automatically to 0x050000, i.e. Qt 5.0.0. We can easily raise the bar, for instance by adding this line into our .pro file:
# disable all the deprecated APIs in Qt <= 5.8
DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x050800
The reason why this deprecation functionality is versioned has to do with future-proofing. We cannot possibly foresee which APIs will get deprecated in Qt. (Actually, we can, because discussions happen in the open on the Development mailing list and on Qt’s code review system. But that’s another story.)
If we turn any usage of deprecated APIs into a hard error, we may run into problems if users of our software upgrade their Qt installation to a higher version than the one we used to develop it: our software may fail to compile on the higher version of Qt because it may use an API deprecated in that higher version but not in the one we used. And that just makes our users not happy.
Recommendation: set QT_DISABLE_DEPRECATED_BEFORE to the highest version of Qt that you developed and tested your software against.
Note that this comes with a cost: any usage of deprecated APIs will cause compile errors, which must therefore be fixed immediately. If we need the software to still compile while porting away from the deprecated API, keep reading…
QT_DEPRECATED_X is a weaker form of QT_DEPRECATED_SINCE — it lets your code compile, but makes the compiler generate warnings if deprecated APIs are used. Again, this is disabled by default; in order to actually have the compiler emit warnings, this feature needs to be explicitly enabled by defining the QT_DEPRECATED_WARNINGS macro:
DEFINES += QT_DEPRECATED_WARNINGS # warn on usage of deprecated APIs
Recommendation: always have the QT_DEPRECATED_WARNINGS macro defined.
We can even combine the QT_DEPRECATED_X and the QT_DEPRECATED_SINCE macros, for maximum effect:
# warn on *any* usage of deprecated APIs, no matter in which Qt version they got marked as deprecated ...
DEFINES += QT_DEPRECATED_WARNINGS
# ... and just fail to compile if APIs deprecated in Qt <= 5.6 are used
DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x050600
That’s it!
During QtCon 2016 we realized that many Qt users do not know about these macros and do not enable them. For this reason, apart from writing this blog post, I decided to invest a little time trying to make them more visible when using Qt tooling.
Since Creator 4.2 you will find these macros enabled by default when creating a new project (commit); the same will happen when using qmake -project in Qt 5.9 (commit).
That is a very hard question to answer. At KDAB we have lots of experience with Clang-based refactoring and porting tools, and we invest considerable time to tune them for both opensource projects (such as clazy) and customer-related work.
In general, we can expect some help coming from tooling, especially when the porting involves some minimal and straightforward refactoring. In other cases, things may not be so simple (cf. the comments in Marc’s blog post about Q_FOREACH), and tooling will help only in a percentage of cases.
Anyhow, stay tuned for more news around this!
Deprecating APIs is a natural thing in software development. As more brand new features get added to upcoming versions of Qt, some old feature may get marked as deprecated.
Luckily, we know that thanks to the Qt source compatibility promise our software will not break; and we can easily know which deprecated APIs we are using in our projects by enabling the relative warnings or errors.
KDAB is a consulting company offering a wide variety of expert services in Qt, C++ and 3D/OpenGL and providing training courses in:
KDAB believes that it is critical for our business to contribute to the Qt framework and C++ thinking, to keep pushing these technologies forward to ensure they remain competitive.
The post Un-deprecate your Qt project appeared first on KDAB.
In this article, I will take a look at one of the fundamental concepts introduced in Alex Stepanov and Paul McJones’ seminal book “Elements of Programming” (EoP for short) — that of a (Semi-)Regular Type and Partially-Formed State.
Using these, I shall try to derive rules for C++ implementations of what are commonly called “value types”, focusing on the bare essentials, as I feel they have not been addressed in sufficient depth up to now: Special Member Functions.
Alex Stepanov and Paul McJones gave us a whole new way of looking at this, with a mathematical theory of types and algorithms quite unlike anything ever done before. Their achievement will forever change the way you look at computer programming, but eight years after its publication, the book still does not get the widespread adoption it deserves.
Special Member Functions, of course, are those member functions of a C++ object that the compiler can write for you: The default constructor, the copy and move constructors, the copy and move assignment operators and the destructor.
A Regular Type in EoP roughly corresponds to the EqualityComparable combined with the CopyConstructible C++ concept, see the book for more details.
A C++ Value Type is a type that is defined by its state, and its state alone (note that EoP has a very different definition of value type). Take an int as an example. Two int objects of value 5 will behave identically under all regular operations (simplified: all operations except for taking the object's address). Two Shape objects, however, both having the same position, color, texture, … may still end up as a square and a triangle when drawn on screen. A Shape object is defined by its behaviour as much as its state. We call such types polymorphic.
There are many shades of grey in between those two extremes; let’s leave it at that crude distinction. See Designing value classes for modern C++ – Marc Mutz @ Meeting C++ 2014 for a somewhat more thorough treatment.
In this article, we will look at two different classes, Rect and Pen, and try to write their Special Member Functions hopefully as Stepanov would have us do.
The first, Rect, is simple: it’s an integral-coordinate rectangle class that we will define completely inline in the header file. Pen, however, will be quite a bit different: It will use the Pimpl Idiom to firewall its internals from users. See Pimp My Pimpl and Pimp My Pimpl — Reloaded for more on the idiom.
class Rect {
    int x1, y1, x2, y2;
public:
};
class Pen {
    class Private; // defined out-of-line
    Private *d;
public:
};
The first task for today is to write the default constructor.
EoP has this to say about the default constructor:
[It] takes no arguments and leaves the object in a partially-formed state.
Ok, so what’s a “partially-formed state”? Here comes the good part:
An object is in a partially-formed state if it can be assigned-to or destroyed.
The authors go on to say that any other operation on partially-formed objects is undefined. In particular, such objects do not, in general, represent a valid value of the type.
The motivation for EoP to require default-construction in the first place is programmer convenience: T a = b; should be equivalent to T a; a = b;, and the user of the type should get to choose whether to write
T a; if (cond) a = b; else a = c;
or
T a = (cond) ? b : c;
Without default construction, if all the type’s author gave are user-defined constructors that establish a valid value, the programmer would have to use the ternary operator, whether or not that fits with line length limitations and personal preferences.
So, let's try to write something for Rect:
class Rect {
    int x1, y1, x2, y2;
public:
    Rect() = default;
};
What do you think? Would you have written the Rect default constructor this way?
I can tell you I wouldn’t have. Not until EoP opened my eyes. Remember that EoP only requires that the default constructor establish a partially-formed state, not a valid value. This should not surprise you. When in C++, do as the ints do:
int x; Rect r;
In both cases, any use of the default-constructed object other than assignment or destruction is undefined, because the values of the objects are undefined (uninitialised).
If you feel uncomfortable with this implementation, you’re letting your inner Java programmer get the better of you. Don’t. This is C++. We embrace the undefined.
And, as Howard Hinnant writes in a reddit comment on this article, we give power to our users:
int x = {}; // x == 0
Rect r = {}; // r == {0, 0, 0, 0}
Next, let’s try Pen.
class Pen {
    class Private; // defined out-of-line
    Private *d;
public:
    Pen() : d(nullptr) {} // inline
    ~Pen() { delete d; } // out-of-line
};
Should we have left Pen::d uninitialised, too?
No. Doing so would make destruction undefined.
Should we have newed a Pen::Private object into Pen::d in the default constructor?
That would be a no, too. We’re not required to establish a valid value in the default constructor, so in the spirit of “don’t pay for what you don’t use”, we only do the minimal work necessary to establish a partially-formed state.
To hammer this one home: Should an implementation of
Colour Pen::colour() const;
check for d == nullptr?
No the third. You can see at a glance in the source code whether an object is in a partially-formed state. There is no need for a runtime check, except for debugging purposes.
From the above, it follows that your default constructors should be noexcept. If your default constructors throw, they do too much. Of course, we’re still talking Value Types here, so let no man say that yours truly told you to make the default constructors of your RAII types noexcept.
For Rect, moving and copying are the same thing, and the compiler is in the best position to implement them for you:
class Rect {
    int x1, y1, x2, y2;
public:
    Rect() = default;
    // compiler-generated copy/move special member functions are ok!
};
Once more, Pen is a bit more interesting:
class Pen {
    class Private; // defined out-of-line
    Private *d;
public:
    Pen() noexcept : d(nullptr) {} // inline
    Pen(Pen &&other) noexcept : d(other.d) { other.d = nullptr; } // inline
    ~Pen() { delete d; } // out-of-line
};
We put moved-from Pen objects into the partially-formed state. In other words: moving from an object has the same effect as default-construction. Can it get any simpler?
We delegate move-assignment to the move constructor:
class Pen {
    class Private; // defined out-of-line
    Private *d;
public:
    Pen() noexcept : d(nullptr) {} // inline
    Pen(Pen &&other) noexcept : d(other.d) { other.d = nullptr; } // inline
    Pen &operator=(Pen &&other) noexcept // inline
    { Pen moved(std::move(other)); swap(moved); return *this; }
    ~Pen() { delete d; } // out-of-line
    void swap(Pen &other) noexcept
    { using std::swap; swap(d, other.d); }
};
Note how all special member functions except the destructor are inline so far, yet we didn’t break encapsulation of the Pen::Private class.
Thanks in no small part to the ISO C++ standard, which describes moved-from objects (in [lib.types.movedfrom]) as follows:
Objects of types defined in the C++ standard library may be moved from. Move operations may be explicitly specified or implicitly generated. Unless otherwise specified, such moved-from objects shall be placed in a valid but unspecified state.
the simple chain of reasoning described so far has fewer friends than you might think. And this is why I wrote this article.
You will probably meet a lot of resistance when trying to implement your default and move constructors this way. But think about it: What would a natural “default value” of your type be?
It's easy to fall for the next-best choice: For int, surely the default-constructed value should be zero, and we just have to put up with these partially-formed, nay: uninitialised, values because C sucks.
I disagree. If you are using the int additively, then, yes, zero is a good default value. But if you work with multiplication, then one would be the better fit.
Bottom line: for the vast majority of types, there is no natural default. If there isn't, then having to establish a randomly-chosen one on every default-construction operation is wasteful, so don't do it.
Instead, have the default constructor establish only a partially-formed state, and provide literals (or named factory functions for something more complex) for the different “default” values:
class Rect {
    static constexpr Rect emptyRect = {};
};
class Pen {
    static Pen none();
    static Pen solidBlackCosmetic();
};
Partially-Formed Objects are nothing magical. They offer a simple description of the behaviour of C++ built-in types with respect to default construction, and of pimpl’ed objects with respect to move semantics, if implemented in the natural way.
In both cases, partially-formed objects are easily spotted in source code with local static reasoning, so demands for anything more fancy than the bare minimum as the result of moving from an object or default-constructing one are violating the C++ principle of “don’t pay for what you don’t use”. As a corollary, keep your default constructors noexcept.
In a future instalment, we will look at a smart pointer that encodes these guidelines for use as a pimpl-pointer.
The post Stepanov-Regularity and Partially-Formed Objects vs. C++ Value Types appeared first on KDAB.
The second stable release of KIO GDrive is now available (version 1.1.0). The only user-visible change is the new Google Drive entry in Dolphin's Network folder:
The Network folder can be reached from Plasma’s Folder View widgets as well:
This replaces the custom .desktop file shipped by kio-gdrive 1.0.x, which used to open Dolphin on the gdrive:// location.
One problem with this new approach is that the Network “folder” is actually provided by a kioslave, which currently lives in plasma-workspace. This means that if you use Dolphin from, say, Gnome Shell then Network will probably not work.
The proper fix is moving this ioslave from plasma-workspace to kio, but it’s not trivial because Plasma and Frameworks have different release schedules, and also because in general moving things around is painful. I already made a patch but it got stuck, possibly because of Plasma 5.9 deadlines. I’ll clean it up and revamp it in the next weeks, hopefully.
I also want to thank Andreas for the new gdrive icon that you see in the screenshots above. You need breeze-icons 5.29 or later to get it.
Today KDE is proud to announce the immediate availability of the KDE Slimbook, a KDE-branded laptop that comes pre-installed with Plasma and KDE Applications (running on Linux) and is assured to work with our software as smoothly as possible.
The KDE Slimbook allows KDE to offer our users a laptop which has been tested directly by KDE developers, on the exact same hardware and software configuration that the users get, and where any potential hardware-related issues have already been ironed out before a new version of our software is shipped to them. This gives our users the best possible way to experience our software, as well as increasing our reach: The easier it is to get our software into users' hands, the more it will be used.
Furthermore, the KDE Slimbook, together with KDE neon, offers us a unique opportunity to isolate and fix issues that users have with our software. When something in Plasma, a KDE Application or some software using a KDE Framework does not work as intended for a user, there are at least three layers that can cause the problem:
Of course KDE always tries to reduce bugs in our software as much as possible. Problems can occur in any of the aforementioned layers, however, and often it is difficult for us to pin-point exactly where things are going wrong. Last year, KDE neon joined the KDE community with the promise to give us control over the operating system layer. This does not mean we won't make our software available on other distributions or operating systems, of course, but it allows us to eliminate that layer as a possible source of a problem.
This left us still with one layer we had zero control over, though: The hardware layer.
Fast-forward to late last year, when the Spanish laptop retailer Slimbook approached KDE with the idea to offer KDE-branded laptops that come pre-installed with Plasma and KDE Applications. We were excited about the idea, and put our designers and developers to the task of creating a branding for such a device and making sure that KDE neon runs without any hardware-related issues on it.

For now, the KDE Slimbook will always come pre-installed with KDE neon, but we are open to offering other distributions that come pre-installed with Plasma for customers to choose from.
The KDE Slimbook is for people who love KDE software, regardless of whether or not they are active contributors to KDE.
For more information, visit the KDE Slimbook website.
After a lot of VMs and a lot of patience (or not), I was able to Craft AtCore on Windows.
Craft is the evolution of Emerge, a tool that KDE developed to cross-compile KDE applications for Windows and Mac. Since the goal of the Atelier project is to reach all those environments, we need to use it.
As far as I know, Craft is mostly written in Python: a lot of scripts to manage KDE Frameworks, Qt, and other dependencies.
Since Atelier isn't ready, for now only AtCore was crafted.
The first step was to write a recipe for AtCore inside Craft. The recipe is the script that manages the repository and the dependencies that the project needs to be built.
AtCore's dependencies are Extra CMake Modules, QtBase, QtSerialPort and Solid.
Extra CMake Modules is a KDE module that adds more features to CMake. QtBase and QtSerialPort are the Qt parts of the project, and Solid manages the connection and disconnection of serial devices on the host. So this is the code:
import info
from CraftConfig import *

class subinfo( info.infoclass ):
    def setTargets( self ):
        self.svnTargets[ 'master' ] = '[git]kde:atcore|master'
        self.defaultTarget = 'master'
        self.shortDescription = "the KDE core of Atelier Printer Host"

    def setDependencies( self ):
        self.buildDependencies["frameworks/extra-cmake-modules"] = "default"
        self.dependencies["libs/qtbase"] = "default"
        self.dependencies["libs/qtserialport"] = "default"
        self.dependencies["frameworks/solid"] = "default"

from Package.CMakePackageBase import *

class Package( CMakePackageBase ):
    def __init__( self ):
        CMakePackageBase.__init__( self )
After that, we needed to set up the Windows environment, so I used an Oracle VirtualBox VM with a Windows 10 ISO. I set up something like 6 VMs until I got it to succeed. In the beginning I had an issue related to some missing libs; that was my own mistake, but since I thought I had screwed things up, I made a new VM. Then I started to have some Python issues that neither I nor Hannah (the maintainer of Craft) could figure out. For some reason, after yet another freshly configured VM, I was able to install Craft.

And then came my dumbest mistake. The Craft manual says: "In order to compile the Qt5 qtbase package with MinGW you will also need to install the Microsoft DirectX SDK, make sure to open a new command line window after the installation." However, for some reason I only registered that I needed to install DirectX and completely missed the SDK part. When I finally managed to read the error output properly, I saw what I was missing. After that, I left my laptop (4 GB, i5, 2 cores for the VM) burning for about 3 hours to compile everything.

Living dangerously
These are the steps I followed to be able to Craft AtCore; if you want to test it, please follow these instructions:
Set-ExecutionPolicy RemoteSigned
iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/KDE/craft/master/setup/install_craft.ps1'))
After that you should be set up. In R:/bin you should find AtCoreTest.exe to run.
I was only able to run the binary with the Craft environment active. If you shut down your PC, you will need to open PowerShell again after it boots and run:
C:\KDE\craft\kdeenv.ps1
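For completeness: the AtCore build itself is done from inside that Craft environment. This is only a rough sketch, assuming the recipe above is registered under the package name atcore; the exact command may differ between Craft versions:
craft atcore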
I was able to make my Arduino Mega a shared device in VirtualBox and it appeared in AtCore. However, we discovered a problem with the path used to install the firmware plugins: they weren't found by AtCoreTest, so I wasn't able to load a plugin, but I could send individual commands to the Arduino and they worked fine. The Atelier team has already discovered what was causing the problem, so it will be fixed soon.
Note: If you want to test it, it’s on you. =D
Please join our channel #kde-atelier on freenode IRC or our Telegram group and give us feedback!
All I ask of you is one thing: please don’t be cynical. I hate cynicism — it’s my least favourite quality and it doesn’t lead anywhere. Nobody in life gets exactly what they thought they were going to get. But if you work really hard, and you’re kind, amazing things will happen.
-- Conan O'Brien, The Tonight Show with Conan O'Brien, 22 January 2010
Conan O'Brien was, for lack of a better term, screwed over by NBC. The Tonight Show, the pinnacle of late-night television and the one show that every television personality wants to host, was Conan's for just under a year. He took over from Jay Leno, the man who had hosted it since May of 1992, on June 1st, 2009. His last episode was aired on the 22nd of January the next year.
Conan was a writer for The Simpsons before he became a television personality hosting his very own show, Late Night with Conan O'Brien. Conan and Jay were both ratings leaders for their respective time slots. Conan had been promised that he would take over from Jay Leno for almost 10 years, and Jay had been told during his renewal in 2004 that this would be his last 5 years hosting the show.
It all went horribly wrong1.
Conan's viewer demographic was vastly different from Jay Leno's, and crucially, somewhat smaller. Faced with declining viewership, NBC gave Jay Leno his own show just before the Tonight Show. This backfired, crashing ratings for both the Tonight Show and The Jay Leno Show.
NBC's solution to this problem was to move The Jay Leno Show to the Tonight Show's timeslot, moving the Tonight Show further back into the Late Show's timeslot. This would bring the status-quo back to the Tonight Show with Jay Leno era, and Conan would be left hosting the Tonight Show just in name.
Conan decided to not play along with NBC. In a statement issued during the height of the crisis, he said he "would not participate in the destruction of the Tonight Show." Just over 7 months after starting any television host's dream job, he left the show. And in his final closing monologue, he said this.
Halfway around the world, on a small television screen in Kolkata, I watched it live. I was 15, and I was just beginning to enter some very difficult times. This monologue would end up being burned into my brain forever, and all the values I would go on to develop would be based around these words.
I've had a privileged life so far, and there's no doubt about it. That is not to say, however, that it was a happy life.
Some of the sadness was chemical. Taking after my family's history, I was diagnosed with Generalised Anxiety Disorder, Social Anxiety and Major Depressive Disorder, and prescribed Paroxetine, a strong antidepressant, in my first year of college. I bought the medicine and took it to college with me. I didn't get to take any, however, because I found a couple of friends in the nick of time who counselled me through my darkest times.
In my first year of college I had no sleep cycle. I would sleep for an hour a night for a week, and then I would sleep nearly twenty hours a day for the next. I couldn't think straight, I couldn't be productive. I couldn't concentrate, I couldn't study, and I wasn't scoring the way I wanted to in my courses. I had less than no self-esteem. And for a time, I was impulsively suicidal.
This year put a fear of failure into me. This wasn't an unfounded fear, like the rest of the fears my anxiety had convinced me were worth attending to. I had seen some very hard financial times in my family and I had to make sure I didn't end up in the same boat. What the anxiety did to me was turn that from one of the factors I would take into account when planning my life ahead into the only thing that mattered: I could not fail.
In school, I would study only the minimum amount required to pass my exams, but the way my parents would scold me for spending all my time in front of a computer and not studying would play at the back of my mind for a long time. I never dated while in school or in college, because it had been driven into me that a girlfriend would simply be a distraction and would take away the minimum amount of concentration that I still had for my studies.
I tried all throughout my first year and the next half to improve myself academically. I just couldn't do it. I sincerely believed I was at fault, that I was being distracted by the Internet, by social media, and my other hobbies. It took me until the end of 2014 to realise that I just didn't have the mental make-up to be an academic, but that wasn't the end of the world; I could still be successful. It helped that I gave success a definition: I wanted a certain kind of life, and if I could have it, I would call myself successful. Success would be different things at different times, but I promised myself that as long as I could meet the milestones I had set for myself for that particular time, I would not beat myself up.
But I was still depressed and mentally fogged, and I needed a kick of inspiration to actually make me follow through on my plans. That came from a man whom I would only get to meet more than a year and a half later, but whom I would see as an idol, as someone who set the standard for the kind of engineer I would like to be. He was almost four thousand miles away from me, in Germany, and he had a blog. His name was Martin Gräßlin2.
Martin was, and as of writing this, still is, the maintainer of KWin, the Window Manager used by Plasma Desktop. Plasma, KWin and a lot of other software is developed by volunteers worldwide, who organise themselves into a community and a support group, called KDE. KDE used to stand for Kool Desktop Environment, the product these volunteers created, but eventually KDE just became KDE and denoted the community of people, not the product.
I had been a Linux user since I was 13, and used to write from time to time for a local magazine called Linux for You (now called Open Source for You). I used to subscribe to a few people from various open source software projects, and Martin was one of the people I was following on Google Plus.
At that time, KWin was undergoing major overhauls to accommodate the shift from X11, the decades-old standard used to implement graphical user interfaces on Linux, to Wayland, the newer, faster, leaner and more secure way of producing nice images on the screen. Martin was doing some groundbreaking work during that time - he basically had to re-invent bits and pieces of X11's functionality and put them into KWin. All this while, he used to blog about his approach to solving problems, his thought processes, and the actions he took as a result of his analyses. I was studying to be a computer engineer, and what he wrote gave me a unique insight into how an actual computer engineer functioned.
It was a kind of glamour I instantly craved. But because of my self-esteem levels, for a long time I thought this was something I would only watch from a distance, never participate in, because I just wasn't good enough and never could be.
This changed in February of 2015. A Dengue Fever scare (I didn't actually contract it) forced me to stay home with a high fever for three weeks. I was pretty good at C++ by then, and had also built a small tool using Qt5 to proxy DNS requests over HTTP ports and basically blow right past every single firewall our university had in place to prevent us from accessing certain content. During this fever-induced downtime, I contemplated looking at some KDE code, but was always limited by my own lack of confidence - I knew I just wouldn't be able to contribute at all.
Then there was one astute moment of clarity, which I distinctly remember, when I woke one day, very late, and thought, "I am a computer engineer. If I'm not able to actually do this, I don't have a career."
In the next 4 months, I re-built KDE's screenshooting utility from scratch. Called KSnapshot-Next, then KScreenGenie, then Kapture, and finally Spectacle by the time of its first KDE Applications release at the end of the year, it taught me more by the time I finished building the core feature set than I had learned about computer science and coding in the past decade. I was brimming with confidence and took on new roles within KDE without a second thought to my abilities. Writing some of the backend code for Spectacle finally gave me a chance to work with my idol - I would constantly have to bug Martin to figure out low-level details about X11 and the xcb library.
The secret sauce? The KDE Community. Some of the friendships that were forged in the IRC channels during my Spectacle days were the difference between life and death for me. Little did I know I was just about to enter a crisis period that would last me almost until the end of college, and my friends in KDE were like a second family to me at a time when I seriously expected to no longer have my first one.
And the fact that I now live in München is a direct consequence of my abandoning my academic ambitions and spending all my later university years with KDE.
Being a part of KDE would not only rescue me from clinical depression, it would also give me a career. But before that, it would give me my first and second foreign travel opportunities.
The first time I ever went outside India was to Berlin for Akademy, KDE's annual world conference in Europe, in the last week of August. I was operating out of a friend's house in Gurgaon at that time, and when I left the house for the airport that night, I still hadn't thought of a life beyond the next week. What was to happen at the airport would change that.
There would be a person whom I would meet at the airport that night, whom I would end up spending every waking moment in Germany with when I wasn't at the conference, and who would change the way I used to think, used to reason, and the things I believed forever.
And apart from that person, meeting all those people whom I had only interacted with online, and who had held my hand and travelled with me through my journey in KDE so far, while attending the conference the entire day, attending parties and dinners in the evening and exploring the city at night would leave me mentally and physically exhausted for nearly two months.
During those two months, my major depression diagnosis was reconfirmed, but I again decided to ride this episode out without medicines. It took me another trip, this time to San Francisco to meet more KDE friends at Google's offices in Silicon Valley to end this episode. But this time I hadn't lost one of my powers - mental clarity. While during this episode I cried after almost a decade of never shedding a tear, after that day I could still think without despairing. And I knew one thing: I had to go back to Germany.
So I started job hunting, but it was another friend from KDE who scored me an interview at his company. It was the first video interview of my life, me sitting at my friend's house in Gurgaon and my future boss interviewing me from the office at Munich. At the end of that interview I got up feeling a genuine inner happiness that I hadn't felt for years. I wasn't a complete idiot.
As it turned out, I had applied to another German company for a work-from-home job, and the morning that I was supposed to leave for San Francisco, the folks from eGym called to confirm that I had indeed got the job and that they would be coming back to me with an offer soon. That same afternoon, the other company also confirmed that they would offer me a job, this time with a salary offer in place.
It wasn't until I had passed through security at the airport that night and was waiting for the flight to Amsterdam to board that I had the time and mental faculties to think about what had transpired over the last twelve hours. It took me half an hour to decide that I would be taking the eGym job, even if I were to get paid less than half the salary I would be paid at the other one, for the simple reason that I would be able to realise my dream, one that was nearly 3 years in the making, of living in Germany.
After returning from San Francisco, I spent the next two months getting all the paperwork and the money together in order to be able to make the move to Munich. And finally, on the 9th of January, I left India, allowing myself a small week-long holiday in Paris to recover from the last three and a half years, before finally moving to Munich to start my new life.
It has been more than a decade since my adolescence brought along my inner turmoils, which perfectly coincided with a shift in family dynamics that would end up leaving me with adult responsibilities at a time when I should have been experiencing my teenage years. I don't regret any of it -- if anything, what I learnt during that time has helped me almost instantly find a balance in my life here in Munich.
But I'm finally happy. Happy to be living life on my own terms, happy to be living amongst some very good friends, and incredibly, happy to be living in a country where I don't feel like an outsider. It is true, I did feel like an outsider in India, being just simply unable to connect with the country's sentiments and ways of life. I finally feel at home.
And work is awesome. I actually wake up early in the morning every day and look forward to going to the office. And what's more, my boss is also ex-KDE!
Here's to the next six months of my life, after which I still have to figure out where I'm going to go, if anywhere, next.
This year, we've got elections in the Netherlands. Which means, I have to choose where my vote goes. And that can be a trifle difficult.
After fifteen years in the free software world, I'm a certified leftie. I'm certainly not going to vote for the conservative party (CDA, formally Christian, been moving into Tea Party territory for a couple of years now), I'm not going to vote for the Liberal Party (VVD) -- that's only the right party for someone who has got more than half a million in the bank. Let's not even begin to talk about the Dutch Fascist Movement (PVV). The left-liberals (D66) are a bit too anti-religion, and, shockingly, being a sub-deacon in the local Orthodox Church, I don't feel at home there. That leaves, more or less, the Socialist Party, the Labour Party and the United Christian party. The Socialist Party has never impressed me with their policies. That leaves two...
Yeah, you know, I'm a Christian. If someone's got a problem with that, that's their problem. I'm also a socialist. If someone's got a problem with that, that's their problem. If someone thinks I'm an ignorant idiot because of either, that's their problem too.
But today, the Labour Party minister for international cooperation, Lilianne Ploumen, has announced an effort to create a fund to counter Trump's so-called "global gag rule". That means that any United States-funded organization which so much as cooperates with any organization involved in so-called "family planning" will lose its funding. She is working to restore the funding.
News headlines make this all about abortion... Which is in any case not something anyone with testicles should concern themselves with. But it isn't just that, and talking only about abortion makes it narrow and easy to attack. As did our local United Christian party, which will never again receive my vote. It's also about education, it's also about contraceptives, it's about helping those Nepali teenage girls who are locked in a cow shed because they're menstruating. It's about helping those girls who get raped by their family get back to school.
It's about making the world a better and safer and healthier place for the girls and women who cannot defend themselves.
And I don't have to worry about my vote anymore. That's settled.
With the Qt 5.8 release, we have added QtNetworkAuth as a Technology Preview module. It is focused on helping developers with this auth******** madness. Currently, it supports OAuth1 and OAuth2. In the future, it will feature more authorization methods.
This post is a first glance at the OAuth2 support in Qt; it covers how to use Google to authorize an application. Your application will be able to show the typical log-in/authorize-app screen, just like a web application (NOTE: a browser or a webview is needed):

The IETF defines OAuth 2.0 as:
The OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf.
OAuth authorization is also a requirement to use the Google APIs and access user information stored in Google Services like Gmail, Drive, Youtube, Maps, and others.
If you are interested in how to create an application using OAuth2, please continue reading.
Before you start writing code, you need to register a new project with the service provider. This step is necessary because you have to request client credentials to identify your application. The following steps are based on the Google API Console, but they are similar for other providers.
To start the project registration, access Google API Console, then, press “Create Project”.

A “New Project” dialog appears. Type the name of your project and click the “Create” button. Now you are ready to manage your new project.

Your application is registered, the dashboard is updated and you can set different options from this screen.

You have to get credentials to identify the application. Go to the “Credentials” section in the dashboard and press the “Create credentials” button. You need to choose which type of credentials you need. Choose “OAuth client ID”.

In the next screen, you can configure a “consent screen” with your Email address, product name, homepage, product logo, … Fill it with your data and press the “Save” button.
Go back to the previous “Create client ID” screen to choose the “Application type”. Different options are available here; you need to choose “Web application”. Give it a name.

Under “Restrictions” there is an important field: the “Authorized redirect URIs”.
If the user chooses to allow your application to manage their data, the application has to receive an “access token” to be used during authenticated calls. The “access token” can be received in several ways, but the most common way is to receive a call to your web server with the “access token”. In web applications, a server path to handle the notification is enough. A desktop application needs to fake it using a local browser.
Add a URI accessible by your Internet browser, including the port if you are using one other than 80. When the application is running, a basic HTTP server will receive information from the browser.
Most of the time you will need to use a URI like: “http://localhost:8080/cb”.
NOTE: The path “/cb” is mandatory in the current QOAuthHttpServerReplyHandler implementation.
NOTE: You can configure different URIs. In this example, a single URI is assumed.
End the process by pressing the “Create” button, and you will see a new credential in the list of credentials, with “Edit”, “Delete” and “Download” buttons on the right. Click the download button and… finally, you get a JSON file ready to parse!

In the screenshot above you can see some new URIs and the client_id and the client_secret. There is no need to use the JSON file, you can hardcode this information directly in your application.
I will omit the part of defining the class and show the relevant code.
In your code create a QOAuth2AuthorizationCodeFlow object:
auto google = new QOAuth2AuthorizationCodeFlow;
Configure the scope you need to access from the application. The scope is the desired permissions the application needs. It can be a single string or a list of strings separated by a character defined by the provider. (Google uses the space character as separator).
NOTE: The scopes are different depending on the provider. To get a list of scopes supported by Google APIs click here.
Let’s use the scope to access the user email:
google->setScope("email");
Connect the authorizeWithBrowser signal to the QDesktopServices::openUrl function to open an external browser to complete the authorization.
connect(google, &QOAuth2AuthorizationCodeFlow::authorizeWithBrowser,
&QDesktopServices::openUrl);
Parse the downloaded JSON to get the settings and information needed. document is an object of type QJsonDocument with the file loaded.
const auto object = document.object();
const auto settingsObject = object["web"].toObject();
const QUrl authUri(settingsObject["auth_uri"].toString());
const auto clientId = settingsObject["client_id"].toString();
const QUrl tokenUri(settingsObject["token_uri"].toString());
const auto clientSecret(settingsObject["client_secret"].toString());
const auto redirectUris = settingsObject["redirect_uris"].toArray();
const QUrl redirectUri(redirectUris[0].toString()); // Get the first URI
const auto port = static_cast<quint16>(redirectUri.port()); // Get the port
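For context, here is a minimal sketch of how the document object used above could be created; the filename client_secret.json is only an assumption, use whatever name your downloaded credentials file has:
QFile jsonFile(QStringLiteral("client_secret.json")); // hypothetical filename of the downloaded credentials
jsonFile.open(QFile::ReadOnly | QFile::Text);
const auto document = QJsonDocument::fromJson(jsonFile.readAll());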
After parsing the file, configure the google object.
google->setAuthorizationUrl(authUri);
google->setClientIdentifier(clientId);
google->setAccessTokenUrl(tokenUri);
google->setClientIdentifierSharedKey(clientSecret);
Create and assign a QOAuthHttpServerReplyHandler as the reply handler of the QOAuth2AuthorizationCodeFlow object. A reply handler is an object that handles the answers from the server and gets the tokens as the result of the authorization process.
auto replyHandler = new QOAuthHttpServerReplyHandler(port, this);
google->setReplyHandler(replyHandler);
The grant function will start the authorization process.
google->grant();
If everything was OK, you should receive a QOAuth2AuthorizationCodeFlow::granted signal and start sending authorized requests.
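As an illustration (not part of the original example), one minimal way to react to that signal is a lambda connection; the debug output is merely a placeholder:
connect(google, &QOAuth2AuthorizationCodeFlow::granted, [google]() {
    // At this point the access token has been received and stored.
    qDebug() << "Access granted, token:" << google->token();
});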
You can try sending a request using https://www.googleapis.com/plus/v1/people/me
auto reply = google->get(QUrl("https://www.googleapis.com/plus/v1/people/me"));
It will give you a QNetworkReply, and when QNetworkReply::finished is emitted you will be able to read the data.
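A minimal sketch of handling that, assuming reply is the object returned by the call above:
connect(reply, &QNetworkReply::finished, [reply]() {
    // Read the response body and release the reply object.
    qDebug() << reply->readAll();
    reply->deleteLater();
});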
To be continued…
The post Connecting your Qt application with Google Services using OAuth 2.0 appeared first on Qt Blog.
Only 21 days after the last stable release, and some huge progress has been made.
The first big addition is a contribution by Matthias Fehring, which adds a validator module, allowing you to validate user input quickly and easily. A multitude of user input types is supported, such as email, IP address, JSON, date and many more, with a syntax that can be used in multiple threads and avoids recreating the parsing rules:
static Validator v({ new ValidatorRequired(QStringLiteral("username")) });
if (v.validate(c, Validator::FillStashOnError)) { … }
Then I wanted to replace uWSGI on my server and use cutelyst-wsgi, but although performance benchmarks show that NGINX still talks faster to cutelyst-wsgi using proxy_pass (HTTP), I wanted to have FastCGI or uwsgi protocol support.
Evaluating FastCGI vs uwsgi was somewhat easy: FastCGI is widely supported, and due to a bad design decision the uwsgi protocol has no concept of keep-alive. So the client talks to NGINX with keep-alive, but NGINX keeps closing the connection when talking to your app, and this makes a huge difference, even if you are using UNIX domain sockets.
uWSGI has served us well, but performance- and flexibility-wise it's not the right choice anymore. In async mode uWSGI has a fixed number of workers, which makes forking take longer and uses a lot more RAM, and it doesn't support keep-alive on any protocol. The 2.1 release (which nobody knows when will be released) will support keep-alive for HTTP, but I still fail to see how that would scale with fixed resources.
Here are some numbers when benchmarking with a single worker on my laptop:
uWSGI 30k req/s (FastCGI protocol doesn’t support keep conn)
uWSGI 32k req/s (uwsgi protocol that also doesn’t support keeping connections)
cutelyst-wsgi 24k req/s (FastCGI keep_conn off)
cutelyst-wsgi 40k req/s (FastCGI keep_conn on)
cutelyst-wsgi 42k req/s (HTTP proxy_pass with keep conn on)
As you can see the uwsgi protocol is faster than FastCGI so if you still need uWSGI, use uwsgi protocol, but there’s a clear win in using cutelyst-wsgi.
UNIX sockets weren't supported in cutelyst-wsgi; they are now, but via a HACK. Sadly, QLocalServer doesn't expose the socket descriptor, plus a few other things are waiting for responses on their bug reports (maybe I'll find time to write patches and ask for review), so I inspect children() until a QSocketNotifier is found, and there I get it. It works great, but I know it might break in future Qt releases; at least it won't crash.
Along with UNIX sockets come command line options like --uid, --gid, --chown-socket and --socket-access, as well as systemd notify integration.
All of this made me review some code and realize a bad decision I had made, which was to store headers in lower case. Since the uWSGI and FastCGI protocols bring them in upper-case form, I was wasting time converting them, and if the request comes in via the HTTP protocol the headers are case-insensitive anyway, so we have to normalize them regardless. This behavior is also used by frameworks like Django, and the change brought a good performance boost; it will only break your code if you use request headers in your Grantlee templates (which is uncommon, and we still have few users). Normalizing headers in the Headers class was also causing QString to detach, giving us a performance penalty; it will still detach if you don't access/set the headers in the stored form (i.e. CONTENT_TYPE).
These changes made for a boost from 60k req/s to 80k req/s on my machine.
But we are not done: Matthias Fehring also found a security issue. I don't know when, but some change of mine broke the code that returned an invalid user, which was used to check whether authentication was successful, allowing a valid username to authenticate even when the logs showed that the password didn't match. Along with his patch, I added unit tests to make sure this never breaks again.
And to finish, today I wrote unit tests for PBKDF2 according to RFC 6070, and while at it I noticed that the code could be faster. Before my changes all tests took 44s, and now they take 22s; twice as fast matters, since it's CPU-bound code that needs to be quick to authenticate users without wasting your CPU cycles.
Get it! https://github.com/cutelyst/cutelyst/archive/r1.3.0.tar.gz
Oh and while the FreeNode #cutelyst IRC channel is still empty I created a Cutelyst on Google groups: https://groups.google.com/forum/#!forum/cutelyst
Have fun!
While we're still working on Vector, Text and Python Scripting, we've already decided: this year, we want to spend time on stabilizing and polishing Krita!
Now, one of the important elements in making Krita stable is bug reports. And we've got a lot of those! But with some bug reports, we're kind of stuck. We cannot figure out what type of hardware or drivers is causing these bugs, so we're asking for your help.
We’ve made a Krita user survey.
In it, we ask things like what type of hardware you have, and whether you have trouble with certain hardware. That way we can figure out which drivers and hardware are problematic and maybe get workarounds. There’s also some other questions, like what you make with Krita and how you get your Krita news.
Recently, the KSyntaxHighlighting framework was added to the KDE Frameworks 5.29 release. And starting with KDE Frameworks 5.29, KTextEditor depends on KSyntaxHighlighting. This also means that KTextEditor now queries KSyntaxHighlighting for available xml highlighting files. As such, the location for syntax highlighting files changed from $HOME/.local/share/katepart5/syntax to
$HOME/.local/share/org.kde.syntax-highlighting/syntax
So if you want to add your own syntax highlighting files to Kate/KDevelop, then you have to use the new location.
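For example, installing a custom highlighting file by hand could look something like this (mylanguage.xml is just a placeholder name):
mkdir -p $HOME/.local/share/org.kde.syntax-highlighting/syntax
cp mylanguage.xml $HOME/.local/share/org.kde.syntax-highlighting/syntax/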
By the way, in former times all syntax highlighting files were located somewhere in /usr/share/. However, for some time now there have been no xml highlighting files there anymore, since all xml files are compiled into the KSyntaxHighlighting library by default. This leads to much faster startup times for KTextEditor-based applications.
If you build Kate (or KTextEditor, or KSyntaxHighlighting) from sources and run the unit tests (`make test`), then the location typically is /$HOME/.qttest/share/org.kde.syntax-highlighting/syntax.
Qt has provided support for state machine based development since the introduction of the Qt State Machine Framework in Qt 4.6. With the new functionality introduced in Qt 5.8 and Qt Creator 4.2, state machine based development is now easier than ever before.
Qt 5.8 introduces the fully supported Qt SCXML module that makes it easy to integrate SCXML based state machines into Qt. Previously, SCXML had to be imported into Qt from external tools, which is still possible. Now Qt Creator 4.2 introduces a new experimental visual state chart editor that allows creation and modification of state charts directly within the Qt Creator IDE. Together with the new editor and other improvements in Qt Creator, state machine based development can be done completely within Qt Creator.
Here is a short screencast that shows these new features in action. For demonstration purposes, Traffic Light, a simple state machine driven example application with a Qt Quick user interface, is recreated from scratch.
Note that the editor is still experimental in Qt Creator 4.2 and the plugin is not loaded by default. Turn it on in Help > About Plugins (Qt Creator > About Plugins on macOS) to try it.
The post Qt SCXML and State Chart Support in Qt Creator appeared first on Qt Blog.

Good day. My name is Adam and I am a 26-year-old person who is trying to learn how to draw…
Hobby 
I try to draw everything, I don’t want to get stuck in drawing only one thing over and over again and leave behind everything else.
People who inspired me when I was younger … much younger … were Satoshi Urushihara, Masamune Shirow and DragonBall artists.
My first adventure with digital painting was about 4-5 years ago, when I bought my first small Wacom Bamboo tablet that I am still using.
A friend of mine mentioned it.
I uninstalled it and then came back after a while 
Everything!
Maybe make it less laggy, but that can be the fault of my laptop.
The featured image. Not really my favourite, but I don’t have anything else worth showing!
It was all random without any technique! I used Pencil 2B and pencil texture, nothing more or less.
Have a nice day everyone and let Krita grow
Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.