research

27 March 2026

Shared Global and Local Geometry of Language Model Embeddings

COLM'25

💡Language models from the same family have remarkably similar token-embedding structure even when their dimensions differ! So a steering vector built in one model can be reused in another with just a linear transformation. E.g., find a vector that raises helpfulness in the 1B and 3B models, then transfer it as-is to the 8B model!
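The transfer recipe above can be sketched in a few lines of numpy (toy dimensions and synthetic embeddings constructed to share structure by design; fitting the map by least squares over paired token embeddings is an illustrative choice, not necessarily the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "small" (d=8) and a "large" (d=16) model whose token
# embeddings share global geometry up to a linear map A_true.
vocab, d_small, d_large = 200, 8, 16
E_small = rng.normal(size=(vocab, d_small))
A_true = rng.normal(size=(d_small, d_large))
E_large = E_small @ A_true  # shared structure by construction

# Fit the linear transform from paired token embeddings (least squares).
A, *_ = np.linalg.lstsq(E_small, E_large, rcond=None)

# Transfer a steering vector found in the small model to the large model.
v_small = rng.normal(size=d_small)
v_large = v_small @ A

# The transferred vector matches the ground-truth mapping.
err = np.linalg.norm(v_large - v_small @ A_true) / np.linalg.norm(v_small @ A_true)
print(err)
```

With real models the paired embeddings would come from the shared vocabulary of the two checkpoints rather than being synthesized.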

27 March 2026

Fresh in Memory: Training-Order Recency Is Linearly Encoded in Language Model Activations

ICLR'26 Poster

💡Language models know not only "what" they learned but also "when" they learned it. ⇒ Verify this through a variety of controlled experiments!

염규환
26 March 2026

TROLL: Trust Regions Improve Reinforcement Learning for Large Language Models

ICLR'26 Oral

💡When training an LLM with RL, the model breaks if it changes too much in one step, so keep updates inside an allowed region to train it safely.
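The "update only within an allowed range" idea can be illustrated with a PPO-style clipped surrogate (a generic stand-in; TROLL's actual trust region operates on token-level output distributions and differs in detail):

```python
import numpy as np

def trust_region_surrogate(ratio, advantage, eps=0.2):
    """PPO-style clipped surrogate: updates that move the policy ratio
    outside [1 - eps, 1 + eps] earn no extra objective value."""
    clipped = np.clip(ratio, 1 - eps, 1 + eps)
    return np.minimum(ratio * advantage, clipped * advantage)

# A step that would move the policy too far gets no extra credit...
big = trust_region_surrogate(np.array([3.0]), np.array([1.0]))
# ...while a modest step inside the region is rewarded in full.
small = trust_region_surrogate(np.array([1.1]), np.array([1.0]))
print(big[0], small[0])
```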

26 March 2026

SEAL: Steerable Reasoning Calibration of Large Language Models for Free

COLM'25

💡Let's curb the tendency toward overly long, convoluted reasoning! ⇒ Classify the reasoning process into three stages and analyze which of them should be trimmed.

26 March 2026

Refusal Tokens: A Simple Way to Calibrate Refusals in Large Language Models

COLM'25

💡Refusal tokens make a model's refusals both finer-grained (performance↑) and more flexible (adjustable at inference time)!

이두호
26 March 2026

LoongRL: Reinforcement Learning for Advanced Reasoning over Long Contexts

ICLR'26 Oral

💡Make a model good at long-context (128K) reasoning using only short-context (16K) RL training. How? ⇒ Train with RL on high-difficulty synthetic data (KeyChain) that hides the question behind a chain of UUIDs; a plan–retrieve–reason–recheck thinking pattern emerges, letting small 7B/14B models reach strong long-context reasoning performance.
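A toy generator for KeyChain-style data (the exact line format below is my assumption; only the idea of hiding the question behind a UUID chain comes from the summary), plus a reference solver showing the hop-by-hop retrieval the RL training is meant to elicit:

```python
import random
import uuid

def make_keychain(question, n_hops=4, seed=0):
    """Build a document where the real question sits at the end of a
    chain of UUID redirects, shuffled into arbitrary order."""
    rng = random.Random(seed)
    keys = [str(uuid.UUID(int=rng.getrandbits(128))) for _ in range(n_hops)]
    lines = [f"{keys[i]} -> {keys[i + 1]}" for i in range(n_hops - 1)]
    lines.append(f"{keys[-1]} -> QUESTION: {question}")
    rng.shuffle(lines)  # bury the chain inside the document
    return keys[0], "\n".join(lines)

def follow_chain(start_key, doc):
    """Reference solver: retrieve hop by hop until the question appears."""
    mapping = dict(line.split(" -> ", 1) for line in doc.splitlines())
    cur = start_key
    while not mapping[cur].startswith("QUESTION: "):
        cur = mapping[cur]
    return mapping[cur][len("QUESTION: "):]

start, doc = make_keychain("What is 2+2?")
print(follow_chain(start, doc))  # -> What is 2+2?
```

In the real setup the document would be padded to 16K tokens with distractor text, and the model, not a solver, has to learn to follow the chain.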

이승환
26 March 2026

Language Model Personalization via Reward Factorization

COLM'25

💡Decompose many users' preferences into shared preference axes (e.g., kindness, conciseness, formality) during training, then, when a new user arrives, assign different weights per axis to quickly estimate that user's personalized preferences!
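The factorization idea in numpy (axis names, sizes, and the least-squares estimator are illustrative assumptions, not the paper's exact method): once responses are scored along shared axes, a new user's weights can be recovered from a handful of graded comparisons.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared preference axes (e.g. kindness, conciseness, formality);
# a user's reward is a weighted sum of axis scores.
n_axes, n_pairs = 3, 50
w_user = np.array([1.0, -0.5, 0.3])          # hidden weights of a new user
phi_a = rng.normal(size=(n_pairs, n_axes))   # axis scores of response A
phi_b = rng.normal(size=(n_pairs, n_axes))   # axis scores of response B

# Observed feedback: how strongly the user prefers A over B on each pair.
X = phi_a - phi_b
y = X @ w_user

# Few-shot estimation of the user's weights by least squares over the axes.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
err = float(np.linalg.norm(w_hat - w_user))
print(err)
```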

26 March 2026

Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning

COLM'25

💡For mathematical reasoning tasks, implement RL indirectly and keep it simple (= solve math problems effectively in a reinforcement-learning framing!).

최민영
26 March 2026

Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate

COLM'25

💡Rather than SFT that simply imitates correct answers, training the model to critique noisy answers is more effective at improving reasoning performance! Apply the way humans learn (critical thinking, analysis, understanding…) to model training.

26 March 2026

Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games

COLM'25

💡If today's reasoning optimization doesn't separately align for cooperation, we may end up with individualistic models that embrace rational self-interest instead of cooperating! In other words, reasoning ability and cooperation ability (in terms of willingness to bear costs) are separate things!

19 March 2026

Why DPO is a Misspecified Estimator and How to Fix It

ICLR'26 Oral

💡Dissects topologically why DPO's underlying assumptions are not realistic; mitigate DPO's misspecification via AuxDPO!

이승환
19 March 2026

What's In My Human Feedback? Learning Interpretable Descriptions of Preference Data

ICLR'26 Oral

💡Proposes WIMHF, which uses an SAE to automatically extract the latent feature axes that decide between two responses in a preference dataset, and explains in natural language which response characteristics drive human preferences.

최민영
19 March 2026

SafeDPO: A Simple Approach to Direct Preference Optimization with Enhanced Safety

ICLR'26 Oral

💡Presents SafeDPO, which strongly guarantees safety (no dangerous answers) in preference alignment while aligning the model as simply as DPO, without RLHF's complex pipeline. It redefines the reward function and reorders the training data so the model consistently prefers safe answers.

염규환
19 March 2026

OrthAlign: Orthogonal Subspace Decomposition for Non-Interfering Multi-Objective Alignment

ICLR'26 Poster

💡When optimizing for multiple preferences, decompose the parameter-update space into orthogonal subspaces so that interference between objectives is eliminated at the source.
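A minimal numpy sketch of the orthogonal-subspace idea (dimensions and the projection rule are illustrative): project the second objective's gradient off the subspace already claimed by the first objective, so its update cannot interfere there.

```python
import numpy as np

def project_out(g, basis):
    """Remove from gradient g its components along the (orthonormalized)
    update subspace of a previously aligned objective."""
    Q, _ = np.linalg.qr(basis)          # orthonormal basis of the subspace
    return g - Q @ (Q.T @ g)

rng = np.random.default_rng(0)
d = 16
subspace = rng.normal(size=(d, 3))      # update directions of objective 1
g2 = rng.normal(size=d)                 # gradient of objective 2
g2_orth = project_out(g2, subspace)

# The projected update has no component inside objective 1's subspace.
interference = float(np.abs(subspace.T @ g2_orth).max())
print(interference)
```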

19 March 2026

Multiplayer Nash Preference Optimization

ICLR'26 Poster

💡The goal of alignment should not be maximizing a reward, but reaching a stable equilibrium that loses to no one across a population of many values and policies!

19 March 2026

How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence

COLM'25

💡Mechanistically analyzes how post-training changes a model's internal knowledge, truthfulness, safety (refusal), and confidence!

19 March 2026

EigenBench: A Comparative Behavioral Measure of Value Alignment

ICLR'26 Oral

💡Rank models by comparing each model's subjective tendencies against other models', quantify them as confidence vectors to judge reliability, and reveal how judgment criteria differ across models!

이두호
19 March 2026

Diffusion Alignment as Variational Expectation-Maximization

ICLR'26 Poster

💡Solve the reward over-optimization and mode collapse problems that arise when aligning a diffusion model to an objective, using an EM algorithm (iterating an E-step (test-time search) → M-step (forward-KL))!

19 March 2026

Beyond Pairwise: Empowering LLM Alignment With (Ranked) Choice Modeling

ICLR'26 Poster

💡Methods like RLHF and DPO are built around pairwise preference optimization, so they pass up the chance to learn from richer human feedback. ⇒ Rank responses beyond pairs and train the model on those (ranked) choices.
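Going beyond pairwise comparisons can be made concrete with the Plackett-Luce ranking likelihood, the standard choice-model generalization of pairwise Bradley-Terry (used here as an illustrative stand-in for the paper's objective):

```python
import numpy as np

def plackett_luce_nll(scores_in_rank_order):
    """Negative log-likelihood of a full ranking under Plackett-Luce:
    the top item is a softmax choice over all items, the next over the
    remaining items, and so on down the ranking."""
    s = np.asarray(scores_in_rank_order, dtype=float)
    nll = 0.0
    for i in range(len(s) - 1):
        # -log softmax of item i over the suffix of remaining items
        nll -= s[i] - np.log(np.exp(s[i:]).sum())
    return nll

# A reward model whose scores agree with the human ranking gets lower loss.
good = plackett_luce_nll([3.0, 2.0, 1.0])   # scores match the rank order
bad = plackett_luce_nll([1.0, 2.0, 3.0])    # scores invert the ranking
print(good < bad)
```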

염규환
21 January 2026

Training a Generally Curious Agent

ICML'25

💡Even without intrinsic rewards, let an LLM learn from diverse synthetic interaction data how to gather information on its own, make step-by-step judgments, and solve problems!

21 January 2026

On LLM-Based Scientific Inductive Reasoning Beyond Equations

EMNLP'25

💡Current LLMs are fundamentally weak at inductively discovering "scientific rules that are not expressed as equations" from observations. To verify this, the authors build a new benchmark, SIRBench-V1, and show that even the latest LLMs mostly remain at low accuracy (~45%).

이승환
21 January 2026

MAP: Multi-Human-Value Alignment Palette

ICLR'25

💡Instead of the usual weight-tuning approach to multi-value alignment, first specify the desired target levels (a palette), then automatically find the λ that satisfies those targets, turning alignment into something with guaranteed Pareto improvement!

21 January 2026

LLMs Encode Harmfulness and Refusal Separately

NIPS'25

💡LLMs encode an instruction's harmfulness and whether to refuse it in different latent spaces!

최민영
21 January 2026

From Trade-off to Synergy: A Versatile Symbiotic Watermarking Framework for Large Language Models

ACL'25

💡Proposes a Symbiotic Watermarking framework that selectively applies logits-based or sampling-based watermarking depending on entropy values computed under two criteria.
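A sketch of the entropy-gated dispatch (the thresholds and the three-way split are made-up illustrations of the idea, not the paper's values): low-entropy steps leave little room to bias logits without hurting quality, while high-entropy steps leave a lot.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a next-token distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def pick_watermark(next_token_probs, low=0.5, high=1.2):
    """Illustrative dispatcher: route each decoding step to a watermark
    family based on the entropy of the next-token distribution."""
    h = entropy(next_token_probs)
    if h < low:
        return "sampling"   # peaked distribution: perturb sampling only
    if h > high:
        return "logits"     # flat distribution: safe to bias logits
    return "hybrid"

print(pick_watermark([0.97, 0.01, 0.01, 0.01]))  # peaked -> "sampling"
print(pick_watermark([0.25, 0.25, 0.25, 0.25]))  # flat   -> "logits"
```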

21 January 2026

Curriculum Debiasing: Toward Robust Parameter-Efficient Fine-Tuning Against Dataset Biases

ACL'25

💡PEFT training tends to overfit to biased examples (because the model converges faster on them) ⇒ Mitigate this by presenting the training data in biased-to-unbiased order!

21 January 2026

Aligning with Logic: Measuring, Evaluating and Improving Logical Preference Consistency in Large Language Models

ICML'25

💡Defines logical preference consistency for LLMs and proposes a matching training-data augmentation scheme, improving both logical preference consistency and performance on logical tasks.

14 January 2026

Understanding and Mitigating Numerical Sources of Nondeterminism in LLM Inference

NIPS'25

💡LLM inference results can change due to numerical error in the computation! ⇒ Reinterpret this from a precision standpoint: how much do results actually change, and how should it be fixed? If the problem lies in the arithmetic, shouldn't computing the arithmetic more precisely be enough? ⇒ Experiments say: yes!
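The root cause is easy to demonstrate even on a CPU: floating-point addition is not associative, so a different reduction order (different kernel, batch size, or parallel split) can change the result of the "same" computation:

```python
# Classic non-associativity example: the two groupings of the same sum
# disagree in the low-order bits, exactly the kind of discrepancy that
# flips argmax ties and cascades through autoregressive decoding.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right, left, right)  # -> False 0.6000000000000001 0.6
```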

14 January 2026

S1: Simple Test-time Scaling

EMNLP'25

💡How can we boost performance at the inference stage rather than the training stage? ⇒ For math/reasoning problems, just control the number of thinking tokens.
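s1-style budget forcing is essentially control flow around decoding; a toy sketch (the token strings, `</think>` marker, and "Wait" continuation are stand-ins for a real decoding loop): suppress the end-of-thinking marker until a minimum budget is spent, and force it once a maximum is reached.

```python
def budget_force(token_stream, min_think=5, max_think=10):
    """Sketch of budget forcing over a stream of decoded tokens:
    too-early stops are replaced with a continuation cue, and thinking
    is cut off once the budget is exhausted."""
    out = []
    for tok in token_stream:
        if tok == "</think>" and len(out) < min_think:
            out.append("Wait")          # extend thinking instead of stopping
            continue
        out.append(tok)
        if tok == "</think>" or len(out) >= max_think:
            break
    if out[-1] != "</think>":
        out.append("</think>")          # cap the thinking budget
    return out

trace = budget_force(iter(["a", "b", "</think>", "c", "d", "e", "f", "</think>"]))
print(trace)
```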

염규환
14 January 2026

Noise Injection Reveals Hidden Capabilities of Sandbagging Language Models

NIPS'25

💡If injecting noise into a model abnormally improves its performance, that hints at sandbagging!

14 January 2026

Let LRMs Break Free from Overthinking via Self-Braking Tuning

NIPS'25

💡Stop unnecessary reasoning (overthinking) from within the model itself!

14 January 2026

Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations

NIPS'25

💡Measures how well an LLM can recognize, evaluate, and control its own internal states via a "neurofeedback" scheme (adjusting internal layers/vectors and measuring activation levels), and shows that this ability is limited.

이승환
14 January 2026

Judge Decoding: Faster Speculative Sampling Requires Going Beyond Model Alignment

ICLR'25

💡Shows that the bottleneck in speculative decoding comes from the target model's alignment-based verification, and introduces Judge Decoding, a new verification scheme that judges token correctness from the target model's embeddings!

최민영
14 January 2026

Interpreting the Repeated Token Phenomenon in Large Language Models

ICML'25

💡If you make an LLM repeat the same word over and over, at some point it fails to keep repeating it and collapses; this happens because the neurons that create the attention sink mistake the repeated token for the "first token of the sequence (BoS)", causing attention to pile onto it.

14 January 2026

Advancing Expert Specialization for Better MoE

NIPS'25

💡The Mixture-of-Experts training loss includes an objective term for routing efficiency across experts, but it has the side effect of hindering each expert's specialization ⇒ add an objective that helps expert specialization without getting in the way of the routing-efficiency goal.
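For reference, the routing-efficiency term the summary refers to commonly takes the Switch-Transformer-style load-balancing form shown below (a representative form, not necessarily this paper's exact term): it is minimized when tokens spread uniformly across experts, which is precisely what pulls against per-expert specialization.

```python
import numpy as np

def load_balancing_loss(router_probs, expert_assignment, n_experts):
    """Switch-Transformer-style auxiliary loss: n_experts * sum_e f_e * P_e,
    where f_e is the fraction of tokens routed to expert e and P_e is the
    mean router probability of expert e. Minimum 1.0 at perfect balance."""
    f = np.bincount(expert_assignment, minlength=n_experts) / len(expert_assignment)
    P = router_probs.mean(axis=0)
    return float(n_experts * (f * P).sum())

n_tokens, n_experts = 8, 4
uniform = np.full((n_tokens, n_experts), 1 / n_experts)
balanced = load_balancing_loss(uniform, np.arange(n_tokens) % n_experts, n_experts)

collapsed_probs = np.tile(np.eye(n_experts)[0], (n_tokens, 1))
collapsed = load_balancing_loss(collapsed_probs, np.zeros(n_tokens, dtype=int), n_experts)
print(balanced, collapsed)  # balanced -> 1.0, collapsed -> 4.0
```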

07 January 2026

What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers

NIPS'25

💡Investigates abrupt learning in Transformer training, where the loss stalls in the early phase and then suddenly drops sharply.

07 January 2026

Superposition Yields Robust Neural Scaling

NIPS'25

💡Superposition is what makes scaling laws work!

이승환
07 January 2026

Scaling Laws for Precision

ICLR'25

💡Systematically analyzes how precision during training and inference affects a language model's performance and cost, and presents precision-aware scaling laws that can predict it.

염규환
07 January 2026

Layer by Layer: Uncovering Hidden Representations in Language Models

ICML'25

💡In autoregressively trained language models, the intermediate-layer representations are the richest!

07 January 2026

How Do Large Language Monkeys Get Their Power (Laws)?

ICML'25

💡LLMs' repeated-sampling performance looks like a power law, but not because of the model's reasoning ability. Each individual problem is already being solved at an exponential rate; a handful of extremely hard problems linger to the end, so the aggregate average merely looks like a power law. ⇒ The power law is not a law of the model but a consequence of the problem-difficulty distribution.
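The mechanism is easy to reproduce in a toy simulation: each problem's failure rate decays exponentially in the number of samples k, but averaging over a heavy-tailed spread of per-problem difficulties yields a slow, power-law-looking aggregate curve (the log-uniform difficulty distribution below is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-problem per-sample success probabilities spread over four orders of
# magnitude: each problem's failure rate (1 - p)**k is a pure exponential.
p = 10 ** rng.uniform(-4, 0, size=5000)

ks = np.array([1, 10, 100, 1000])
avg_fail = np.array([np.mean((1 - p) ** k) for k in ks])

# The aggregate decays far slower than any single exponential: even at
# k = 1000 a sizable fraction of problems remains unsolved, which is what
# produces the straight-ish line on a log-log pass@k plot.
print(avg_fail)
```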

07 January 2026

EvoLM: In Search of Lost Language Model Training Dynamics

NIPS'25

💡A language model's performance depends less on how long it trained on how large a dataset than on what it learned, how, and when, at each stage; CPT (Continued Pre-Training) determines the performance of subsequent supervised and reinforcement learning.

최민영
07 January 2026

Capturing the Temporal Dependence of Training Data Influence

ICLR'25

๐Ÿ’ก๋ฐ์ดํ„ฐ์˜ ๊ฐ€์น˜๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ โ€˜๋ฌด์—‡์ด๋ƒโ€™ ๋ณด๋‹ค โ€˜ํ•™์Šต ์‹œ์ ์— ์–ธ์ œ ๋“ฑ์žฅํ–ˆ๋ƒโ€™์— ์˜ํ•ด ๊ฒฐ์ •๋œ๋‹คํ•ด๋‹น ๋…ผ๋ฌธ์€ ํ•™์Šต ๊ฒฝ๋กœ(trajectory)์™€ ๋ฐ์ดํ„ฐ์˜ ๋“ฑ์žฅ ์‹œ๊ธฐ๋ฅผ ๊ณ ๋ คํ•˜๋Š” ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ ์˜ํ–ฅ๋ ฅ ์ •์˜ TSLOO๋ฅผ ์ œ์•ˆํ•จ

07 January 2026

AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text

ICLR'25

💡Can LLMs catch up to humans in creativity? ⇒ Nope, not yet. Can creativity be used to tell LLMs and humans apart? ⇒ Yep, it can.

30 December 2025

Which Agent Causes Task Failures and When? On Automated Failure Attribution of LLM Multi-Agent Systems

ICML'25

💡When an LLM multi-agent system fails, automatically figure out which agent erred and when! Proposes a benchmark and evaluates current LLMs on it.

30 December 2025

To Mask or to Mirror: Human-AI Alignment in Collective Reasoning

EMNLP'25

💡Do LLMs imitate people, or do they shed universally human biases and make better decisions than people? Analysis via a leader-election experiment shows it differs by LLM (GPT and Gemini model humans as-is; Claude makes the better choice).

염규환
17 December 2025

Quantifying Elicitation of Latent Capabilities in Language Models

NIPS'25

💡Quantifies, experimentally and theoretically, that LLMs already possess latent capabilities and that training only a very small number of random parameters is enough to elicit them efficiently.

17 December 2025

Mind the Gap: Bridging Thought Leap for Improved Chain-of-Thought Tuning

NIPS'25

💡For CoT-based LLM reasoning, what matters is not how many reasoning traces the model learns from but how accurately and clearly each reasoning process is conveyed. That is, the study experimentally confirms that structural completeness matters more than content.

17 December 2025

Chain-of-Model Learning for Language Model

NIPS'25

💡Splitting representations into sequential sub-representations lets you keep the existing model while training it further, and makes it both extensible and flexible!

10 December 2025

Mind the Value-Action Gap: Do LLMs Act in Alignment with Their Values?

EMNLP'25

💡What an LLM claims about its own values and how it actually behaves in concrete situations can differ! So trust it only so far, and stay cautious when handing it tasks.

10 December 2025

Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers

NIPS'25

💡Generalization and hallucination are both manifestations of out-of-context reasoning, which is learnable because the Output matrix and the Value matrix are decoupled!

이승환
10 December 2025

Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models

ICLR'25

💡Inside an LLM there really is a latent direction that marks whether the model knows a given entity. Steering this direction can make the model hallucinate on questions it used to refuse (saying it doesn't know), or refuse to answer about entities it used to know well.

26 November 2025

On the Role of Attention Heads in Large Language Model Safety

ICLR'25

💡Shows that LLM safety is actually concentrated in a small number of attention heads, so that merely switching those heads off 🚨 immediately breaks safety, and proposes Ships·Sahara 🔍 for finding which heads really carry the safety load ⚙️🔥

최민영
26 November 2025

Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes

NIPS'24

💡Jailbreak: a prompt-manipulation attack in which a user bypasses a model's safeguards to elicit dangerous answers it should refuse. Exploits the observation that when an LLM is fed a jailbreak prompt, the gradient of its refusal-loss landscape becomes erratic, and uses this signature to block jailbreak attacks.

26 November 2025

Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?

NIPS'25

💡RLVR makes the model efficient at finding correct paths among its sampling paths, but it doesn't make the model consider anything the base model wouldn't! Worse, as sampling increases, the reasoning scope actually becomes narrower than the base model's! My insight: another curse of knowledge?!

이승환
26 November 2025

A Probabilistic Perspective on Unlearning and Alignment for Large Language Models

ICLR'25

💡To evaluate whether unlearning or alignment really worked, you cannot judge a single deterministic output; you must evaluate the model's full output distribution probabilistically. To that end, the paper proposes new probabilistic evaluation metrics in place of the existing deterministic ones.
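A toy illustration of why the distribution matters (the numbers are made up): greedy decoding can look perfectly "unlearned" while sampling leaks the forgotten answer almost surely.

```python
import numpy as np

# Next-token distribution over ["safe", "secret"] after unlearning.
p = np.array([0.6, 0.4])

# Deterministic evaluation: greedy decoding picks "safe", so the model
# appears to have forgotten the secret.
greedy_leaks = bool(np.argmax(p) == 1)

# Probabilistic evaluation: over 20 independent samples the secret is
# emitted at least once with near certainty.
p_leak_at_least_once = float(1 - (1 - p[1]) ** 20)

print(greedy_leaks, p_leak_at_least_once)
```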