DeepFindr
Uniform Manifold Approximation and Projection (UMAP) | Dimensionality Reduction Techniques (5/5)
▬▬ Papers / Resources ▬▬▬
Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing
Sources:
- TDA Introduction: www.frontiersin.org/articles/10.3389/frai.2021.667963/full
- TDA Blogpost: chance.amstat.org/2021/04/topological-data-analysis/
- TDA Applications Blogpost: orbyta.it/tda-in-a-nutshell-how-can-we-find-multidimensional-voids-and-explore-the-black-boxes-of-deep-learning/
- TDA Intro Paper: arxiv.org/pdf/2006.03173.pdf
- Mathematical UMAP Blogpost: topos.site/blog/2024-04-05-understanding-umap/
- UMAP Author Talk: ua-cam.com/video/nq6iPZVUxZU/v-deo.html&ab_channel=Enthought
- UMAP vs. t-SNE Global preservation paper: dkobak.github.io/pdfs/kobak2021initialization.pdf
- Fuzzy Topology Slidedeck: speakerdeck.com/lmcinnes/umap-uniform-manifold-approximation-and-projection-for-dimension-reduction?slide=39
- Short UMAP Tutorial: jyopari.github.io/umap.html
Image Sources:
- Thumbnail Image: johncarlosbaez.wordpress.com/2020/02/10/the-category-theory-behind-umap/
- Persistent Homology: orbyta.it/tda-in-a-nutshell-how-can-we-find-multidimensional-voids-and-explore-the-black-boxes-of-deep-learning/
▬▬ Support me if you like 🌟
►Link to this channel: bit.ly/3zEqL1W
►Support me on Patreon: bit.ly/2Wed242
►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
►E-Mail: deepfindr@gmail.com
▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
Music from #Uppbeat (free for Creators!):
uppbeat.io/t/sulyya/weather-compass
License code: ZRGIWRHMLMZMAHQI
▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬
All Icons are from flaticon: www.flaticon.com/authors/freepik
▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction
00:32 Local vs. Global Techniques
01:25 Is UMAP better?
02:08 The Paper
02:40 Topological Data Analysis Primer
04:04 Simplices
05:04 Filtration
06:22 Persistent Homology
07:02 UMAP Overview
07:40 Step 1: Graph construction
08:25 Uniform distribution
09:44 Non-uniform real-world data
10:48 Enforcing uniformity
12:05 Exponential decay
12:43 Local connectivity constraint
14:24 Distance function
16:19 Local metric spaces
17:00 Fuzzy simplicial complex
18:38 The full picture of step 1
19:10 Step 2: Graph layout optimization
19:55 Comparing graphs
21:15 Cross entropy loss
22:14 Attractive and repulsive forces
22:56 More details
24:04 Code
26:28 t-SNE vs. UMAP
27:24 Outro
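As a quick companion to the Code chapter (24:04), here is a minimal umap-learn example; this is a hedged sketch with illustrative parameter values, not the notebook's exact code:

    import numpy as np
    import umap  # pip install umap-learn

    X = np.random.rand(500, 50)            # toy high-dimensional data
    reducer = umap.UMAP(n_neighbors=15,    # size of the local neighborhood
                        min_dist=0.1,      # how tightly points may pack
                        n_components=2)    # target dimensionality
    embedding = reducer.fit_transform(X)   # shape: (500, 2)

n_neighbors controls the local/global trade-off discussed in the video: small values preserve local structure, larger values favor global structure.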
▬▬ My equipment 💻
- Microphone: amzn.to/3DVqB8H
- Microphone mount: amzn.to/3BWUcOJ
- Monitors: amzn.to/3G2Jjgr
- Monitor mount: amzn.to/3AWGIAY
- Height-adjustable table: amzn.to/3aUysXC
- Ergonomic chair: amzn.to/3phQg7r
- PC case: amzn.to/3jdlI2Y
- GPU: amzn.to/3AWyzwy
- Keyboard: amzn.to/2XskWHP
- Bluelight filter glasses: amzn.to/3pj0fK2
Views: 1 638

Videos

t-distributed Stochastic Neighbor Embedding (t-SNE) | Dimensionality Reduction Techniques (4/5)
Views: 3.5K · 4 months ago
To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/DeepFindr. The first 200 of you will get 20% off Brilliant's annual premium subscription. (Video sponsored by Brilliant.org) ▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing Entropy: gregorygundersen.com/blog/2020/09/01/gaussian-entropy/ At...
Multidimensional Scaling (MDS) | Dimensionality Reduction Techniques (3/5)
Views: 3K · 5 months ago
To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/DeepFindr. The first 200 of you will get 20% off Brilliant's annual premium subscription. ▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing Kruskal Paper 1964: cda.psych.uiuc.edu/psychometrika_highly_cited_articles/kruskal_1964a.pdf Very old...
Principal Component Analysis (PCA) | Dimensionality Reduction Techniques (2/5)
Views: 3.8K · 6 months ago
▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing Peter Bloem PCA Blog: peterbloem.nl/blog/pca PCA for DS book: pca4ds.github.io/basic.html PCA Book: cda.psych.uiuc.edu/statistical_learning_course/Jolliffe I. Principal Component Analysis (2ed., Springer, 2002)(518s)_MVsa_.pdf Lagrange Multipliers: ekamperi.github.io/mathemati...
Dimensionality Reduction Techniques | Introduction and Manifold Learning (1/5)
Views: 9K · 7 months ago
Brilliant 20% off: brilliant.org/DeepFindr/ ▬▬ Papers / Resources ▬▬▬ Intro to Dim. Reduction Paper: drops.dagstuhl.de/opus/volltexte/2012/3747/pdf/12.pdf T-SNE Visualization Video: ua-cam.com/video/wvsE8jm1GzE/v-deo.html&ab_channel=GoogleforDevelopers On the Surprising Behavior of Distance Metrics in High Dimensional Space: link.springer.com/chapter/10.1007/3-540-44503-X_27 On the Intrinsic Di...
LoRA explained (and a bit about precision and quantization)
Views: 48K · 10 months ago
▬▬ Papers / Resources ▬▬▬ LoRA Paper: arxiv.org/abs/2106.09685 QLoRA Paper: arxiv.org/abs/2305.14314 Huggingface 8bit intro: huggingface.co/blog/hf-bitsandbytes-integration PEFT / LoRA Tutorial: www.philschmid.de/fine-tune-flan-t5-peft Adapter Layers: arxiv.org/pdf/1902.00751.pdf Prefix Tuning: arxiv.org/abs/2101.00190 ▬▬ Support me if you like 🌟 ►Link to this channel: bit.ly/3zEqL1W ►Support m...
Vision Transformer Quick Guide - Theory and Code in (almost) 15 min
Views: 60K · 11 months ago
▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1P9TPRWsDdqJC6IvOxjG2_3QlgCt59P0w?usp=sharing ViT paper: arxiv.org/abs/2010.11929 Best Transformer intro: jalammar.github.io/illustrated-transformer/ CNNs vs ViT: arxiv.org/abs/2108.08810 CNNs vs ViT Blog: towardsdatascience.com/do-vision-transformers-see-like-convolutional-neural-networks-paper-explained-91b4bd5185c8 Swi...
Personalized Image Generation (using Dreambooth) explained!
Views: 7K · 1 year ago
▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1QUjLK6oUB_F4FsIDYusaHx-Yl7mL-Lae?usp=sharing Stable Diffusion Tutorial: jalammar.github.io/illustrated-stable-diffusion/ Stable Diffusion Paper: arxiv.org/abs/2112.10752 Hypernet Blogpost: blog.novelai.net/novelai-improvements-on-stable-diffusion-e10d38db82ac Dreambooth Paper: arxiv.org/abs/2208.12242 LoRa Paper: arxiv.o...
Equivariant Neural Networks | Part 3/3 - Transformers and GNNs
Views: 5K · 1 year ago
▬▬ Papers / Resources ▬▬▬ SchNet: arxiv.org/abs/1706.08566 SE(3) Transformer: arxiv.org/abs/2006.10503 Tensor Field Network: arxiv.org/abs/1802.08219 Spherical Harmonics UA-cam Video: ua-cam.com/video/EcKgJhFdtEY/v-deo.html&ab_channel=BJBodner Spherical Harmonics Formula: ua-cam.com/video/5PMqf3Hj-Aw/v-deo.html&ab_channel=ProfessorMdoesScience Tensor Field Network Jupyter Notebook: github.com/U...
Equivariant Neural Networks | Part 2/3 - Generalized CNNs
Views: 4.8K · 1 year ago
▬▬ Papers / Resources ▬▬▬ Group Equivariant CNNs: arxiv.org/abs/1602.07576 Convolution 3B1B video: ua-cam.com/video/KuXjwB4LzSA/v-deo.html&ab_channel=3Blue1Brown Fabian Fuchs Equivariance: fabianfuchsml.github.io/equivariance1of2/ Steerable CNNs: arxiv.org/abs/1612.08498 Blogpost GCNN: medium.com/swlh/geometric-deep-learning-group-equivariant-convolutional-networks-ec687c7a7b41 Roto-Translation...
Equivariant Neural Networks | Part 1/3 - Introduction
Views: 10K · 1 year ago
▬▬ Papers / Resources ▬▬▬ Fabian Fuchs Equivariance: fabianfuchsml.github.io/equivariance1of2/ Deep Learning for Molecules: dmol.pub/dl/Equivariant.html Naturally Occurring Equivariance: distill.pub/2020/circuits/equivariance/ 3Blue1Brown Group Theory: ua-cam.com/video/mH0oCDa74tE/v-deo.html&ab_channel=3Blue1Brown Group Equivariant CNNs: arxiv.org/abs/1602.07576 Equivariance vs Data Augmentation...
State of AI 2022 - My Highlights
Views: 2.8K · 1 year ago
▬▬ Sources ▬▬▬▬▬▬▬ - State of AI Report 2022: www.stateof.ai/ ▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬ All Icons are from flaticon: www.flaticon.com/authors/freepik ▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬ Music from Uppbeat (free for Creators!): uppbeat.io/t/sensho/forgiveness License code: AG34GTPX2CW8CTHS ▬▬ Used Videos ▬▬▬▬▬▬▬▬▬▬▬ Byron Bhxr: www.pexels.com/de-de/video/wissenschaft-animation-dna-biochemie-11268031/ ▬▬ Ti...
Contrastive Learning in PyTorch - Part 2: CL on Point Clouds
Views: 16K · 1 year ago
▬▬ Papers/Sources ▬▬▬▬▬▬▬ - Colab Notebook: colab.research.google.com/drive/1oO-Raqge8oGXGNkZQOYTH-je4Xi1SFVI?usp=sharing - SimCLRv2: arxiv.org/pdf/2006.10029.pdf - PointNet: arxiv.org/pdf/1612.00593.pdf - PointNet++: arxiv.org/pdf/1706.02413.pdf - EdgeConv: arxiv.org/pdf/1801.07829.pdf - Contrastive Learning Survey: arxiv.org/ftp/arxiv/papers/2010/2010.05113.pdf ▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬ All Ico...
Contrastive Learning in PyTorch - Part 1: Introduction
Views: 28K · 1 year ago
▬▬ Notes ▬▬▬▬▬▬▬▬▬▬▬ Two small things I realized when editing this video - SimCLR uses two separate augmented views as positive samples - Many frameworks have separate projection heads on the learned representations which transforms them additionally for the contrastive loss ▬▬ Papers/Sources ▬▬▬▬▬▬▬ - Intro: sthalles.github.io/a-few-words-on-representation-learning/ - Survey: arxiv.org/ftp/arx...
Self-/Unsupervised GNN Training
Views: 16K · 1 year ago
▬▬ Papers/Sources ▬▬▬▬▬▬▬ - Molecular Pre-Training Evaluation: arxiv.org/pdf/2207.06010.pdf - Latent Space Image: arxiv.org/pdf/2206.08005.pdf - Survey Xie et al.: arxiv.org/pdf/2102.10757.pdf - Survey Liu et al.: arxiv.org/pdf/2103.00111.pdf - Graph Autoencoder, Kipf/Welling: arxiv.org/pdf/1611.07308.pdf - GraphCL: arxiv.org/pdf/2010.13902.pdf - Deep Graph Infomax: arxiv.org/pdf/1809.10341.pdf...
Diffusion models from scratch in PyTorch
Views: 233K · 1 year ago
Causality and (Graph) Neural Networks
Views: 16K · 2 years ago
How to get started with Data Science (Career tracks and advice)
Views: 1.6K · 2 years ago
Converting a Tabular Dataset to a Temporal Graph Dataset for GNNs
Views: 11K · 2 years ago
Converting a Tabular Dataset to a Graph Dataset for GNNs
Views: 30K · 2 years ago
How to handle Uncertainty in Deep Learning #2.2
Views: 3K · 2 years ago
How to handle Uncertainty in Deep Learning #2.1
Views: 5K · 2 years ago
How to handle Uncertainty in Deep Learning #1.2
Views: 3.7K · 2 years ago
How to handle Uncertainty in Deep Learning #1.1
Views: 11K · 2 years ago
Recommender Systems using Graph Neural Networks
Views: 21K · 2 years ago
Fake News Detection using Graphs with Pytorch Geometric
Views: 14K · 2 years ago
Fraud Detection with Graph Neural Networks
Views: 25K · 2 years ago
Traffic Forecasting with Pytorch Geometric Temporal
Views: 22K · 2 years ago
Friendly Introduction to Temporal Graph Neural Networks (and some Traffic Forecasting)
Views: 27K · 2 years ago
Python Graph Neural Network Libraries (an Overview)
Views: 8K · 2 years ago

COMMENTS

  • @datacuriosity
    @datacuriosity 2 hours ago

    StellarGraph is another good library, too.

  • @SambitTripathy
    @SambitTripathy 5 hours ago

    After watching many LoRA videos, this one finally makes me satisfied. I have a question: in the fine-tuning code, they talk about merging LoRA adapters. What is that? Is it h += x @ (W_A @ W_B) * alpha? Can you mix and match adapters to improve the evaluation score?
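    For context, "merging" a LoRA adapter means folding the low-rank update back into the frozen base weight, so inference needs no extra matmul. A minimal sketch with made-up shapes (alpha below stands for the effective scaling, which the paper defines as alpha/r; this is not the exact PEFT implementation):

        import torch

        d, r = 768, 8                      # hidden size, LoRA rank
        W = torch.randn(d, d)              # frozen pretrained weight
        W_A = torch.randn(d, r) * 0.01     # low-rank factor A
        W_B = torch.randn(r, d) * 0.01     # low-rank factor B
        alpha = 1.0                        # effective scaling factor

        x = torch.randn(1, d)
        h_adapter = x @ W + (x @ W_A @ W_B) * alpha   # adapter kept separate
        W_merged = W + (W_A @ W_B) * alpha            # fold update into W
        h_merged = x @ W_merged                       # one matmul at inference
        assert torch.allclose(h_adapter, h_merged, atol=1e-4)

    Mixing several adapters roughly amounts to summing their updates into W; whether that improves evaluation scores depends on how related the tasks are.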

  • @keremkosif1348
    @keremkosif1348 5 hours ago

    thanks

  • @adosar7261
    @adosar7261 1 day ago

    Isn't the embedding layer redundant? I mean, we then have the projection matrices, meaning that embedding + projection is a composition of two linear layers.
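    The observation above, that two stacked linear maps collapse into one, can be checked directly; a toy sketch (assuming no bias and no nonlinearity in between):

        import torch
        import torch.nn as nn

        emb = nn.Linear(32, 64, bias=False)   # "embedding" as a linear map
        proj = nn.Linear(64, 16, bias=False)  # projection matrix

        x = torch.randn(4, 32)
        composed = proj(emb(x))                       # two separate layers
        fused = x @ (emb.weight.T @ proj.weight.T)    # one fused matrix
        assert torch.allclose(composed, fused, atol=1e-5)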

  • @lorenzoneri-co5hj
    @lorenzoneri-co5hj 7 days ago

    (Rome is bigger than NYC)

    • @DeepFindr
      @DeepFindr 7 days ago

      When it comes to area, probably yes :P but not population-wise

  • @betabias
    @betabias 9 days ago

    Keep making content like this; I am sure you will get very good recognition in the future. Thanks for such amazing content.

  • @deadliftform4920
    @deadliftform4920 9 days ago

    best

  • @alivecoding4995
    @alivecoding4995 10 days ago

    A great video as usual. Very detailed and comprehensive 😊. Only one thing left me confused: why isn't it a problem to use Euclidean distance in t-SNE and UMAP? You would have assumed they skip it completely.

  • @imadOualid
    @imadOualid 12 days ago

    Thank you a lot for all your videos :D Can you do one about GraphSAGE?

  • @deadliftform4920
    @deadliftform4920 12 days ago

    Best explanation out there. I would love to connect with you to learn about the ML/AI journey I want to go through. Please tell me how I can connect with you.

  • @andreiguzovski7774
    @andreiguzovski7774 13 days ago

    Must-watch series for any CS major.

  • @henk_iii
    @henk_iii 14 days ago

    Thank you for the excellent presentation of the topic, a job well done!

  • @ngwoonyee8001
    @ngwoonyee8001 14 days ago

    thank you so much for the splendid explanation!

  • @veerasaidurga8502
    @veerasaidurga8502 15 days ago

    I have worst experience in your channel deep learning course videos are very less and your channel has worst audio clarity

    • @DeepFindr
      @DeepFindr 15 days ago

      Pro tip: don't watch the channel :)

  • @veerasaidurga8502
    @veerasaidurga8502 15 days ago

    This is the worst voice clarity i have experienced in the UA-cam and video editing you have to improve how can people understand

    • @DeepFindr
      @DeepFindr 15 days ago

      Lol u must be really unhappy in life :D

  • @veerasaidurga8502
    @veerasaidurga8502 15 days ago

    Your voice clarity is toooo worst

    • @DeepFindr
      @DeepFindr 15 days ago

      Always make sure that insults are grammatically correct!

    • @dennislinnert5476
      @dennislinnert5476 15 days ago

      @veerasaidurga8502 Who hurt you, brother? Get some help; maybe going outside of the basement would help ;) or it's just the Hyderabad people

  • @edinjelacic2132
    @edinjelacic2132 15 days ago

    Awesome presentation, you should really be proud of this series! Learned quite a lot, and also learned how to transfer knowledge effectively. All the best!

  • @hannahnelson4569
    @hannahnelson4569 15 days ago

    Thank you. I think I understand now.

  • @zacklee5787
    @zacklee5787 16 days ago

    I have come to understand attention as key, query, value multiplication/addition. Do you know why this wasn't used and if it's appropriate to call it attention?

    • @DeepFindr
      @DeepFindr 16 days ago

      Hi, Query/Key/Value is just a design choice of the Transformer model; attention itself is a more general mechanism. There is also a GNN Transformer (look for Graphormer) that follows the query/key/value pattern. The attention mechanism is detached from this concept and is simply a way to learn importance between embeddings.
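      To make that last point concrete, a toy sketch of attention as "learned importance between embeddings", stripped of any query/key/value projections (illustrative only, not code from the video):

          import torch
          import torch.nn.functional as F

          emb = torch.randn(5, 16)             # 5 node/token embeddings
          scores = emb @ emb.T / 16 ** 0.5     # pairwise importance scores
          weights = F.softmax(scores, dim=-1)  # each row sums to 1
          mixed = weights @ emb                # importance-weighted mixture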

  • @kevon217
    @kevon217 17 days ago

    Excellent overview. Appreciate it!

  • @gayanpathirage7675
    @gayanpathirage7675 17 days ago

    There was an error in your published code, but not in the video:

        attn_output, attn_output_weights = self.att(x, x, x)

    It should be:

        attn_output, attn_output_weights = self.att(q, k, v)

    Anyway, thanks for sharing the video and code base. It helped me a lot while learning ViT.
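    For readers applying this fix, a self-contained sketch of the corrected call; the projection layers here are hypothetical stand-ins for whatever produces q, k, and v in the notebook:

        import torch
        import torch.nn as nn

        att = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
        x = torch.randn(2, 10, 64)   # (batch, tokens, embedding)

        q_proj = nn.Linear(64, 64)   # hypothetical projections
        k_proj = nn.Linear(64, 64)
        v_proj = nn.Linear(64, 64)
        q, k, v = q_proj(x), k_proj(x), v_proj(x)

        # The published code passed (x, x, x); the fix uses the projections:
        attn_output, attn_output_weights = att(q, k, v)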

  • @RamaRaoBandreddi-go1et
    @RamaRaoBandreddi-go1et 19 days ago

    Great video. Amazing stuff. I have a query: in this use case, it is assumed that distance-based calculations formulate the edge index, and hence it is constant. How should we proceed if the edges/edge indices change for every time snapshot?
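    One possible direction for this question (a hedged sketch with toy data, not from the video): torch_geometric_temporal also provides DynamicGraphTemporalSignal, which accepts a separate edge index and weight per snapshot:

        import numpy as np
        from torch_geometric_temporal.signal import DynamicGraphTemporalSignal

        T, n_nodes, n_feats, n_edges = 4, 10, 3, 20
        edge_indices = [np.random.randint(0, n_nodes, (2, n_edges)) for _ in range(T)]
        edge_weights = [np.ones(n_edges) for _ in range(T)]
        features = [np.random.rand(n_nodes, n_feats) for _ in range(T)]
        targets = [np.random.rand(n_nodes) for _ in range(T)]

        # One graph per snapshot instead of a single static edge index:
        dataset = DynamicGraphTemporalSignal(edge_indices, edge_weights,
                                             features, targets)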

  • @schoenwettersl
    @schoenwettersl 22 days ago

    You are amazing at explaining. Congratulations on having done this so incredibly well.

  • @Gustavo-nn7zc
    @Gustavo-nn7zc 22 days ago

    Hi, great video, thanks! Is there a way to use SHAP for ARIMA/SARIMA?

  • @smitshah6665
    @smitshah6665 24 days ago

    Your content and explanations are incredibly helpful. Thank you.

  • @josephmargaryan
    @josephmargaryan 29 days ago

    Is this better for the MNIST challenge than a simple conv network like LeNet?

  • @joemeyer9655
    @joemeyer9655 29 days ago

    Awesome!

  • @user-in2dd6by9q
    @user-in2dd6by9q 1 month ago

    Great video explaining LoRA! Thanks.

  • @farisnurhafiz7832
    @farisnurhafiz7832 1 month ago

    Is it okay to not scale the numerical data? Can we just proceed with the analysis as is?

  • @efexzium
    @efexzium 1 month ago

    Can you please make a video on how to perform inference with a ViT, like Google's open-source Vision Transformer?

  • @rafa_br34
    @rafa_br34 1 month ago

    Great video! For me, the code makes it easier to understand the math than the actual formulas, so videos like these really help.

  • @mathurnil4616
    @mathurnil4616 1 month ago

    great video....

  • @emindurmus993
    @emindurmus993 1 month ago

    This is really amazing content, but there is a problem on Colab: the code does not work anymore.

  • @darwins-dawn
    @darwins-dawn 1 month ago

    Great video, thanks for the work! Here is a question from a complete beginner with MLflow and deployment: do I need 3 different machines to run the servers separately? Thanks!

    • @DeepFindr
      @DeepFindr 1 month ago

      Hi, nope, you just need 3 terminals/tabs in the terminal :) The different servers will run on different ports of the same machine.

    • @darwins-dawn
      @darwins-dawn 1 month ago

      @DeepFindr I think I get the idea. Appreciate it. 8)

  • @bayesian7404
    @bayesian7404 1 month ago

    Great talk. It’s very clearly explained and well presented.

  • @flecart
    @flecart 1 month ago

    good job!

  • @MuhammadAbdullah-wr3nh
    @MuhammadAbdullah-wr3nh 1 month ago

    For the line "from torch_geometric_temporal.dataset import METRLADatasetLoader" I am getting this error:

        ModuleNotFoundError                       Traceback (most recent call last)
        <ipython-input-5-ab694df90048> in <cell line: 2>()
              1 import numpy as np
        ----> 2 from torch_geometric_temporal.dataset import METRLADatasetLoader
              3 from torch_geometric_temporal.signal import StaticGraphTemporalSignal
              4
              5 loader = METRLADatasetLoader()

        3 frames
        /usr/local/lib/python3.10/dist-packages/torch_geometric_temporal/nn/attention/tsagcn.py in <module>
              4 import torch.nn as nn
              5 from torch.autograd import Variable
        ----> 6 from torch_geometric.utils.to_dense_adj import to_dense_adj
              7 import torch.nn.functional as F
              8

        ModuleNotFoundError: No module named 'torch_geometric.utils.to_dense_adj'

    Can you kindly guide what could be the issue?
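    One workaround, as an untested sketch: newer torch_geometric versions export to_dense_adj directly from torch_geometric.utils, while torch_geometric_temporal still imports the removed submodule path. Assuming that is the cause, you can either pin an older torch_geometric or register a shim module before the failing import:

        import sys, types
        import torch_geometric.utils as pyg_utils

        # Recreate the old module path that torch_geometric_temporal expects:
        shim = types.ModuleType("torch_geometric.utils.to_dense_adj")
        shim.to_dense_adj = pyg_utils.to_dense_adj
        sys.modules["torch_geometric.utils.to_dense_adj"] = shim

        from torch_geometric_temporal.dataset import METRLADatasetLoader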

  • @Wilkshot
    @Wilkshot 1 month ago

    Excellent.

  • @beyond_infinity16
    @beyond_infinity16 1 month ago

    Explained quite well!

  • @JessSightler
    @JessSightler 1 month ago

    I've changed the output layer a bit, to this:

        self.head_ln = nn.LayerNorm(emb_dim)
        self.head = nn.Sequential(
            nn.Linear(int((1 + self.height / self.patch_size * self.width / self.patch_size) * emb_dim), out_dim)
        )

    Then in forward:

        x = x.view(x.shape[0], int((1 + self.height / self.patch_size * self.width / self.patch_size) * x.shape[-1]))
        out = self.head(x)

    The downside is that you'll likely get a lot more overfitting, but without it the network was not really training at all.

    • @DeepFindr
      @DeepFindr 1 month ago

      Hi, thanks for your recommendation. I would probably not use this model for real-world data, as many important details are missing (for the sake of providing a simple overview). I will pin your comment for others who also want to use this implementation. Thank you!

  • @LibertyEater
    @LibertyEater 1 month ago

    BEST DIMENSIONALITY REDUCTION VIDEO SERIES EVER! You are the 3Blue1Brown of data mining.

    • @DeepFindr
      @DeepFindr 1 month ago

      Thanks for the nice words!

  • @vero811
    @vero811 1 month ago

    I think there is some confusion between the CLS token and the positional embedding at 6:09?

  • @SuperMaker.M
    @SuperMaker.M 1 month ago

    Finally a channel with good content!

  • @abhinavvura4973
    @abhinavvura4973 1 month ago

    Hi there, I have used the code for binary classification, but I'm encountering a problem with accuracy: it shows 100% accuracy only on label 1 and sometimes on label 2. It would be helpful if you could provide a solution.

    • @DeepFindr
      @DeepFindr 1 month ago

      Hi, please see pinned comment. Maybe this helps :)

  • @divelix2666
    @divelix2666 1 month ago

    Very useful and informative video, especially the PointNet and batch size parts. Special thanks for using the point cloud domain!

    • @DeepFindr
      @DeepFindr 1 month ago

      Glad it was useful! :)

  • @muhammadtariq7474
    @muhammadtariq7474 1 month ago

    Where can I get the slides used in the video?

  • @ArunkumarMTamil
    @ArunkumarMTamil 1 month ago

    How does LoRA fine-tuning track changes via the two decomposition matrices? How is ΔW determined?

  • @misraimburgos7461
    @misraimburgos7461 1 month ago

    This channel is a gift

  • @kutilkol
    @kutilkol 1 month ago

    Ideot read paper. Lol

  • @brunotagna4220
    @brunotagna4220 1 month ago

    Your videos are great: super high quality and clear explanations. Thank you so much!