

How Do LLMs Actually Work?

Have you ever chatted with an AI and felt a shiver of awe? Like it wasn't just following rules, but actually getting what you were saying? Or watched as a simple prompt turned into a fully formed story, complete with characters and a plot? It's easy to think of these large language models, or LLMs, as pure magic, but the reality is even cooler: a symphony of mathematics and data that's both mind-bending and surprisingly elegant. Let's pull back the curtain and peek inside the "brain" of one of these digital wizards. It's a bit like learning how a master painter works: you see the finished canvas, but the real genius is in the strokes, the colors, and the technique.

Step 1: Turning Words into a Language the AI Understands

A computer doesn't see "cat" or "house" like we do. It sees numbers. So, the first thing an LLM does is translate our words into its own numerical language.

Breaking It Down: Your sentence, "The dog c...
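To make Step 1 concrete, here is a minimal Python sketch of tokenization. The vocabulary and the ID numbers below are invented purely for illustration (real LLMs learn subword vocabularies with tens of thousands of entries), but the idea is the same: every piece of text is mapped to integers the model can do math on.

# Toy tokenizer: maps words to integer IDs.
# The vocabulary and IDs are made up for illustration only.
vocab = {"the": 1, "dog": 2, "chased": 3, "cat": 4, "<unk>": 0}

def tokenize(sentence):
    # Split on whitespace and look each word up; unknown words map to <unk>.
    return [vocab.get(word.lower(), vocab["<unk>"]) for word in sentence.split()]

print(tokenize("The dog chased the cat"))  # [1, 2, 3, 1, 4]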
Recent posts

Maxpool

This layer is used to downsample an image (reduce its height and width). It slides a pooling window over the image and keeps only the maximum value in each window. Suppose you have this image:

2 3 4 5
3 4 5 6
4 5 6 7

With a pool size of (2,2) and the usual stride of 2, each non-overlapping 2x2 block is replaced by its maximum. Only the top two rows form complete windows here, so the output is:

4 6
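For readers who want to try this, here is a minimal NumPy sketch of 2x2 max pooling with stride 2 (it assumes rows and columns that do not fill a complete window are simply dropped, as in the example above):

import numpy as np

def max_pool_2x2(image):
    # Crop to complete 2x2 windows, then take the max over each block.
    h, w = image.shape
    h, w = h - h % 2, w - w % 2
    blocks = image[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

image = np.array([[2, 3, 4, 5],
                  [3, 4, 5, 6],
                  [4, 5, 6, 7]])
print(max_pool_2x2(image))  # [[4 6]]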

Convolution layer

This is the main layer of the feature-extraction part of a convolutional neural network. A convolution layer has a filter (kernel) of shape (x,x); generally x=3, so the filter shape is (3,3).

Kernel:
2 2 1
3 3 3
3 2 1

Image patch:
2 3 5
3 3 3
3 2 1

The filter is multiplied element-wise with the patch of the image under it, and the products are summed:

(2*2)+(3*2)+(5*1)+(3*3)+(3*3)+(3*3)+(3*3)+(2*2)+(1*1) = 56

So the pixel at this position of the output feature map becomes 56. The filter then slides across the whole image, repeating this calculation at every position.
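The same calculation can be written in a few lines of NumPy. This is a minimal sketch (no padding, stride 1) of sliding a kernel over an image; CNN frameworks call this operation "convolution" even though, strictly speaking, it is cross-correlation:

import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image; at each position, multiply
    # element-wise and sum to produce one output pixel.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.array([[2, 2, 1],
                   [3, 3, 3],
                   [3, 2, 1]])
patch = np.array([[2, 3, 5],
                  [3, 3, 3],
                  [3, 2, 1]])
print(conv2d(patch, kernel))  # [[56.]]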

CNN (Convolutional Neural Network)

CNN is used to classify images and is most effective for 2D data. A CNN has four main parts:
1> Convolution layer, for extracting features
2> Max pooling layer, for reducing dimensions
3> Flatten layer, to convert the 2D feature maps to 1D so they can be passed to the ANN
4> ANN (fully connected network), which makes the final prediction
A minimal sketch of this stack is given below.
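Here is a minimal sketch of that four-part stack in Keras (the input shape, filter counts, and number of classes are placeholder values chosen for illustration, not taken from the post):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),  # 1> feature extraction
    layers.MaxPooling2D(pool_size=(2, 2)),                                  # 2> dimension reduction
    layers.Flatten(),                                                       # 3> 2D -> 1D
    layers.Dense(64, activation="relu"),                                    # 4> ANN
    layers.Dense(10, activation="softmax"),                                 #    class prediction
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()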