
Core Concepts and Terminology

1. Perception and Processing

  • Perceive (verb): to become aware of or detect, especially through the senses
    • Example: Computers can perceive subtle differences in images that humans might miss.
  • Perceptual (adjective): relating to perception
    • Example: Perceptual computing aims to mimic human sensory processing.
  • Misperception (noun): a mistaken or incorrect perception
    • Example: Image artifacts can lead to misperception in computer vision systems.

2. Visual Properties

  • Translucency (noun): the property of letting light pass through diffusely; semi-transparency
    • Example: Processing translucency in images requires advanced rendering techniques.
  • Subtle (adjective): slight; delicate; not immediately obvious
    • Example: The algorithm can detect subtle changes in lighting conditions.
  • Shading (noun): variation in brightness across a surface due to lighting
    • Example: Proper shading analysis helps in understanding 3D structures.

3. Image Analysis

  • Effortlessly (adverb): easily; without difficulty
    • Example: Modern AI systems effortlessly process thousands of images per second.
  • Portrait (noun): a picture or photograph of a person
    • Example: Face detection algorithms are crucial for portrait photography.
  • Rapid (adjective): fast; swift
    • Example: Rapid image processing is essential for real-time applications.

4. Technical Processes

  • Delineate (verb): to trace the outline of; to describe precisely (a boundary-tracing sketch follows this list)
    • Example: The algorithm can delineate object boundaries precisely.
  • Causality (noun): the relationship between cause and effect
    • Example: Understanding causality in scene analysis improves prediction accuracy.
  • Radiometry (noun): the measurement of radiant energy such as light
    • Example: Radiometry is fundamental to understanding light behavior in images.

5. Industrial Applications

  • Inspection (noun): careful examination or checking
    • Example: Automated visual inspection systems are widely used in manufacturing.
  • Retail (noun): the sale of goods directly to consumers
    • Example: Computer vision enhances retail analytics and customer behavior tracking.
  • Warehouse (noun): a building where goods are stored
    • Example: Automated warehouse systems use computer vision for inventory management.
  • Logistics (noun): the management of the flow of goods
    • Example: Computer vision optimizes logistics operations through package tracking.

6. Advanced Applications

  • Intra-operative (adjective): occurring during surgery
    • Example: Intra-operative imaging assists surgeons during procedures.
  • Morphology (noun): the study of form and structure; in imaging, shape-based processing (see the sketch after this list)
    • Example: Mathematical morphology is used in image processing.
  • Surveillance (noun): close observation or monitoring
    • Example: Video surveillance systems employ advanced computer vision techniques.

7. Image Processing Techniques

  • Stitching (noun): joining multiple images into a single larger one
    • Example: Panoramas are created through image stitching.
  • Overlapping (adjective): partially covering one another
    • Example: Overlapping images are used in 3D reconstruction.
  • Bracketing (noun): capturing the same scene at several exposure settings
    • Example: HDR images are created using exposure bracketing.
  • Morphing (noun): gradual transformation of one image into another (a blending sketch follows this list)
    • Example: Face morphing creates smooth transitions between images.

8. Research and Development

  • Meticulously (adverb): carefully and with great attention to detail
    • Example: AI models must be meticulously trained for optimal performance.
  • Layer-wise, sample-wise (adjective): per layer, per sample
    • Example: Neural networks are analyzed layer-wise for better understanding.
  • Preexisting (adjective): already existing
    • Example: Preexisting models can be fine-tuned for specific tasks (a fine-tuning sketch follows this list).
  • Plausible (adjective): seemingly reasonable or believable
    • Example: The system generates plausible reconstructions of 3D scenes.

9. Industrial Human-Robot Collaboration

  • Pose Forecasting (noun phrase): predicting future human or robot poses
    • Example: Pose forecasting is crucial for safe human-robot interaction.
  • Ablation (noun): removal of a part, e.g., of a neural network component
    • Example: Ablation studies help understand the contribution of different model components.
  • Endow (verb): to provide with a quality or ability
    • Example: The new sensor endows the robot with better perceptive capabilities.
  • Perceptive (adjective): able to perceive; insightful
    • Example: Perceptive robots can adapt to dynamic environments.
  • Harm (noun/verb): damage or injury; to damage or injure
    • Example: The primary goal is to prevent any harm to the human worker.
  • Transmission of contact wrenches (noun phrase): the transfer of contact forces and torques
    • Example: Accurate transmission of contact wrenches is vital for haptic feedback.
  • On-the-fly (adverbial phrase): immediately; in real time
    • Example: The robot can adjust its path on-the-fly to avoid obstacles.
  • Adjacency (noun): the state of being next to or connected with something (see the sparse-graph sketch after this list)
    • Example: Adjacency matrices are used to represent connections in a graph.
    • Example: Adjacency lists are another way to represent graph connections.
  • Sparse (adjective): thinly scattered; containing mostly zero entries
    • Example: Sparse data can be challenging for some machine learning models.
  • Sparsity (noun): the property of being sparse
    • Example: Sparsity is a desirable property in many high-dimensional datasets.
  • Abide (verb): to comply with (rules)
    • Example: Robots must abide by safety protocols.
  • Genuine (adjective): real; authentic
    • Example: The system aims to achieve genuine collaboration between humans and robots.
  • Foresee (verb): to predict; to anticipate
    • Example: It is difficult to foresee all possible scenarios in a complex environment.
  • Exploited (verb, passive or past form): made full use of
    • Example: The robot's capabilities were fully exploited in the task.
  • Aspect (noun): a particular part or feature
    • Example: Safety is a critical aspect of human-robot collaboration.
  • Intersection (noun): the point or set where things meet or overlap
    • Example: The intersection of AI and robotics has led to many innovations.
  • Prune (verb): to cut away or remove, e.g., redundant network weights
    • Example: Pruning the neural network can reduce its complexity.
  • Factorize (verb): to decompose into factors
    • Example: We can factorize the matrix to understand its underlying structure.
  • Adjacency (๋ช…์‚ฌ): ์ธ์ ‘, ๊ทผ์ ‘ (์ค‘๋ณต๋œ ๋‹จ์–ด์ž…๋‹ˆ๋‹ค. ํ•„์š”์‹œ ํ•˜๋‚˜๋ฅผ ์‚ญ์ œํ•˜๊ฑฐ๋‚˜ ๋‹ค๋ฅธ ์˜๋ฏธ๋กœ ์‚ฌ์šฉ๋œ ๊ฒฝ์šฐ ์„ค๋ช…์„ ์ถ”๊ฐ€ํ•ด์ฃผ์„ธ์š”.)
    • Example: Adjacency lists are another way to represent graph connections.
  • Spectral (ํ˜•์šฉ์‚ฌ): ์ŠคํŽ™ํŠธ๋Ÿผ์˜, ๋ถ„๊ด‘์˜
    • Example: Spectral analysis is used in various sensor technologies.
  • Coarse (ํ˜•์šฉ์‚ฌ): ๊ฑฐ์นœ, ๋Œ€๋žต์ ์ธ
    • Example: A coarse estimation was made initially, followed by a finer adjustment.
  • Sparsifying (๋™์‚ฌ/ํ˜•์šฉ์‚ฌ): ํฌ์†Œํ™”ํ•˜๋Š”
    • Example: Sparsifying techniques are used to reduce data dimensionality.

10. Modern Artificial Intelligence

  • Formalize (verb): to state precisely, often mathematically
    • Example: We need to formalize the problem statement before designing a solution.
  • Density (noun): the amount per unit of space; in probability, how likely values are per unit
    • Example: Probability density functions describe the likelihood of a continuous variable.
  • PDF (Probability Density Function) (noun): the density of a continuous distribution (a sketch follows this list)
    • Example: The PDF of a normal distribution is bell-shaped.
  • PMF (Probability Mass Function) (noun): the probability of each value of a discrete distribution
    • Example: The PMF is used for discrete random variables.
  • Accomplish (verb): to achieve; to complete successfully
    • Example: The AI model was able to accomplish the task with high accuracy.
  • Observed (adjective/past participle): seen or measured
    • Example: The observed data was used to train the model.
  • Intractable (adjective): very difficult to handle or compute
    • Example: Some problems are computationally intractable.
  • Coefficient (noun): a multiplying factor in a mathematical term
    • Example: The coefficients of the linear regression model were estimated.
  • Era (noun): a period of time; an age
    • Example: We are in the era of big data and artificial intelligence.
  • Deluge (noun): a flood; an overwhelming amount
    • Example: Researchers face a deluge of data from various sources.
  • Apparently (adverb): evidently; it seems that
    • Example: Apparently, the new algorithm outperforms the old one.
  • Sufficient (adjective): enough; adequate
    • Example: Sufficient data is required to train a robust model.
  • Altering (verb/adjective): changing
    • Example: Altering the hyperparameters can significantly affect model performance.
  • Ability to fix (phrase): the capability to repair something
    • Example: The system has the ability to fix minor errors automatically.
  • Devising (verb): inventing; coming up with
    • Example: Devising new algorithms is a key part of AI research.
  • Empirical (adjective): based on observation or experiment
    • Example: Empirical results show the effectiveness of the proposed method.
  • Theorem (noun): a proven mathematical or logical statement
    • Example: Bayes' theorem is fundamental in probability theory.
  • Criterion (noun): a standard for judging (plural: criteria)
    • Example: Accuracy is a common criterion for evaluating classification models.
  • Impose (verb): to apply or enforce, e.g., constraints
    • Example: We can impose constraints on the optimization problem.
  • Regularization (noun): in machine learning, a penalty that discourages overly complex models
    • Example: Regularization helps prevent overfitting in machine learning models.
  • Implicitly (adverb): without being stated directly
    • Example: The model implicitly learns features from the data.
  • Explicitly (adverb): stated directly and clearly
    • Example: Some parameters need to be explicitly defined.
  • Preferences (noun): likes and priorities
    • Example: The recommendation system learns user preferences over time.
  • Validation (noun): checking correctness or performance
    • Example: Cross-validation is used to assess model performance.
  • Problematic (adjective): causing difficulty
    • Example: Handling missing data can be problematic.
  • Associate (verb): to link or relate
    • Example: The goal is to associate input features with output labels.
  • Evaluations (noun): assessments
    • Example: Multiple evaluations were conducted to ensure robustness.
  • Converge (verb): to approach a limit or a single point
    • Example: The optimization algorithm should converge to a good solution.
  • Convex (adjective): (mathematics) bulging outward; for a convex function, any local minimum is a global minimum
    • Example: Convex optimization problems are generally easier to solve.

Writing Technical Papers

  • Inventing (verb): creating or devising something new
    • Example: Inventing new algorithms for efficient image processing.
  • Revising (verb): modifying and correcting
    • Example: Revising the methodology section of the research paper.
  • Drafting (verb): writing a first version
    • Example: Drafting the experimental results section.

11. Mathematics, Linear Algebra, Probability, and Machine Learning Terminology

  • Vector: a mathematical object with magnitude and direction, usually represented as a list of numbers.
  • Matrix: a rectangular array of numbers arranged in rows and columns.
  • Inner Product (Dot Product): an operation that multiplies two vectors to produce a scalar; used to measure vector similarity (see the NumPy sketch after this list).
  • Outer Product: an operation that multiplies two vectors to produce a matrix.
  • Matrix-Vector Product: an operation that multiplies a matrix by a vector to produce a vector; the result can be expressed as a linear combination of the matrix's columns.
  • Matrix-Matrix Product: an operation that multiplies two matrices to produce a matrix.
  • Identity Matrix: a square matrix with ones on the main diagonal and zeros elsewhere; it acts as the identity element for matrix multiplication.
  • Diagonal Matrix: a square matrix whose entries outside the main diagonal are all zero.
  • Transpose: the operation that swaps a matrix's rows and columns (Aᵀ).
  • Symmetric Matrix: a square matrix equal to its transpose (A = Aᵀ).
  • Trace: the sum of the main-diagonal entries of a square matrix (tr(A)).
  • Norm: a function measuring the "size" of a vector or matrix; examples include the L1, L2, L∞, and Frobenius norms.
  • Linear Independence: the property that no vector in a set can be expressed as a linear combination of the others.
  • Rank: the maximum number of linearly independent rows or columns of a matrix; it indicates how much of the space the matrix spans.
  • Invertible (Non-singular) Matrix: a square matrix that has an inverse.
  • Inverse Matrix: the unique matrix that, when multiplied with A, yields the identity matrix (A⁻¹).
  • Orthogonal Vectors: two vectors whose inner product is zero; they are perpendicular to each other.
  • Normalized Vector: a vector whose L2 norm equals 1.
  • Orthogonal Matrix: a square matrix whose columns are mutually orthogonal and normalized (orthonormal), so that UᵀU = I = UUᵀ; its inverse equals its transpose.
  • Determinant: a scalar defined for square matrices; it indicates how much the linear transformation represented by the matrix scales volumes. The determinant is 0 exactly when the matrix is singular.
  • Quadratic Form: a scalar of the form xᵀAx for a square matrix A and a vector x.
  • Positive Definite (PD) Matrix: a symmetric matrix with xᵀAx > 0 for every nonzero vector x.
  • Positive Semidefinite (PSD) Matrix: a symmetric matrix with xᵀAx ≥ 0 for every vector x.
  • Eigenvalue: a scalar λ such that multiplying a vector by the matrix A preserves its direction and scales its magnitude by λ (Ax = λx).
  • Eigenvector: a nonzero vector corresponding to an eigenvalue (Ax = λx).
  • Gradient: the vector of a multivariable function's partial derivatives (∇f(x)); it points in the direction of steepest increase.
  • Hessian: the matrix of a multivariable function's second partial derivatives (∇²f(x)); used to determine convexity or concavity.
  • Random Variable: a function that assigns a numerical value to the outcome of an experiment.
  • Cumulative Distribution Function (CDF): the function giving the probability that a random variable is at most a given value (F_X(x) = P{X ≤ x}).
  • Probability Density Function (PDF): the function describing the distribution of a continuous random variable, obtained as the derivative of the CDF (p_X(x) = dF_X(x)/dx).
  • Independent Random Variables: random variables whose joint distribution factors into the product of their marginal distributions.
  • Expected Value (Mean): the "average" value of a random variable, based on its theoretical distribution (E[X] = ∫ x p_X(x) dx).
  • Variance: a measure of how spread out a random variable is around its mean (Var(X) = E[|X − E[X]|²]).
  • Correlation: a measure of the strength of the linear relationship between two random variables (E[XY*]).
  • Covariance: a measure of how two random variables vary together (Cov(X, Y) = E[(X − E[X])(Y − E[Y])*]).
  • Random Vector: several random variables collected into a vector.
  • Multivariate Gaussian (Normal) Distribution: the case where the joint distribution of several random variables is Gaussian; it is defined by a mean vector and a covariance matrix.
  • Jointly Gaussian Random Vector: a random vector for which any linear transformation yields a Gaussian random variable. Even if each component is Gaussian, the vector may fail to be jointly Gaussian.
  • Random Process: a collection of random variables indexed by time or space; this study guide deals with random processes on a 2D discrete grid.
  • Power Spectrum (Power Spectral Density): the discrete-space Fourier transform (DSFT) of the autocorrelation function of a wide-sense stationary (WSS) random process; it describes the process's frequency content.
  • Loss Function: a function measuring the mismatch between a machine learning model's prediction and the true value (L(D(x), y)).
  • Supervised Learning: training a model using input data (x) paired with corresponding ground-truth labels (y).
  • Empirical Loss Function: the average or sum of the loss over actual training samples, used as an approximation of the analytical loss.
  • Gradient Descent: an optimization algorithm that moves a fixed distance in the direction of the negative gradient from the current point in order to find a local minimum.
  • Learning Rate (Step Size): the distance moved at each step of gradient descent.
  • Stochastic Gradient Descent (SGD): an optimization algorithm that updates model parameters using the gradient computed on a small, randomly chosen batch of data instead of the full training set (see the SGD sketch after this list).
  • Batch Size: the number of data samples used in each SGD update step.
  • Local Minimum: a point where the function's gradient is zero and its Hessian is positive definite.
  • Overfitting: the phenomenon where a model fits the training data very well but performs poorly on new test data.
  • Underfitting: the phenomenon where a model fails to learn the patterns in the training data and performs poorly on both training and test data.
  • Regularization: a technique added to the learning algorithm to prevent overfitting and improve the model's generalization.
  • No Free Lunch Theorem: the theorem that no single machine learning algorithm performs optimally on every kind of problem.
  • Cross-Validation: splitting the data into several folds, training on some and evaluating on the rest, in order to measure model performance and estimate generalization.
  • Classification: the problem of assigning input data to one of several predefined categories (classes).
  • Regression: the problem of predicting a continuous (real-valued) output from input data.
  • Density Estimation: the problem of modeling or estimating the probability distribution of given data.
  • Object Detection: the problem of locating objects in an image or video and identifying their classes (a combination of classification and regression).
  • Self-Supervised Learning: training a model using learning signals generated from the input data itself, without ground-truth labels.