I recently switched tasks from writing the ColBERT Live! library and related benchmarking tools to authoring BM25 search for Cassandra. I was able to implement the former almost entirely with "coding in English" via Aider. That is: I gave the LLM tasks, in English, and it generated diffs for me that Aider applied to my source files. This made me easily 5x more productive vs writing code by hand, even with AI autocomplete like Copilot. It felt amazing! (Take a minute to check out this short thread on a real-life session with Aider, if you've never tried it.)

Coming back to Cassandra, by contrast, felt like swimming through molasses. Doing everything by hand is tedious when you know that an LLM could do it faster if you could just structure the problem correctly for it. It felt like writing assembly without a compiler -- a useful skill in narrow situations, but mostly not a good use of human intelligence today.

The key difference in these two sce...
I spent way too long trying to figure out this problem with dlib while using the Python face_recognition library that wraps it, and since I couldn't find anyone giving the correct diagnosis and solution online, I'm posting it as a public service to the next person who hits it.

Here's the error I was getting:

    RuntimeError: Error while calling cudaMallocHost(&data, new_size*sizeof(float)) in file /home/jonathan/Projects/dlib/dlib/cuda/gpu_data.cpp:211. code: 2, reason: out of memory

Eventually I gave up and switched from the GPU model ("cnn") to the CPU one ("hog"). Then I started getting errors about

    RuntimeError: Unsupported image type, must be 8bit gray or RGB image.

The errors persisted after adding PIL code to convert to RGB. This one was easier to track down on Google: it happens when you have numpy 2.x installed, which is not compatible with dlib. Seems like something along the way should give a warning about that! At any rate, with numpy dow...
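For the record, here's roughly what the working setup looks like -- numpy pinned below 2.0, the CPU "hog" model, and an explicit RGB conversion through PIL. This is just a minimal sketch against the stock face_recognition API; the filename is a placeholder.

    # Requires numpy < 2 for dlib compatibility, e.g.: pip install "numpy<2"
    import numpy as np
    from PIL import Image
    import face_recognition

    # Load with PIL and force RGB, then hand face_recognition a plain uint8 array.
    # ("example.jpg" is a placeholder filename.)
    image = np.array(Image.open("example.jpg").convert("RGB"))

    # model="hog" runs on the CPU; model="cnn" uses dlib's CUDA-backed detector,
    # which is what was hitting the cudaMallocHost out-of-memory error above.
    locations = face_recognition.face_locations(image, model="hog")
    encodings = face_recognition.face_encodings(image, known_face_locations=locations)
    print(f"Found {len(locations)} face(s)")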