In this blog post, we take on the challenge of running, on SageMaker Studio Lab, code that improves the quiz-answering accuracy of OpenCALM, an open-source (OSS) Japanese LLM, by Fine-Tuning it with LoRA (Low-Rank Adaptation). We begin by explaining the background and the problem, but if you would rather start running things right away, it is easiest to skip ahead to "Preparing to run the Japanese LLM OpenCALM on SageMaker Studio Lab."

Limitations

As of this writing, SageMaker Studio Lab imposes a limit on continuous usage time per day for both CPU and GPU runtimes. It is also an internet-facing service with no option for long-lived connections. Because the service is popular, Start runtime may not succeed on the first attempt, especially for GPU, and you may have to try several times. We hope you will enjoy the service while working around these characteristics.

I would like to train a Japanese LLM, but...

Recently, several open-source Japanese LLMs (Large Language Models) have appeared, and many readers are probably curious about them for both personal and business use. Using a Japanese LLM, a kind of generative AI, you can realize enterprise search or more capable chatbots with RAG (Retrieval Augmented Generation) in a Japanese-ready form. Enterprise search and chatbots are means for users to obtain the information they need, so speeding them up and improving their accuracy benefits many use cases. Working with customers every day as a Solutions Architect, I believe this is why RAG comes up so often among generative AI consultations. Let me pick up a few questions we frequently receive from customers.

- Does the LLM used for RAG need to be trained?

There are three options for thinking about LLM training in RAG:

1. Use a pre-trained LLM as-is (no additional training)
2. Re-train (Fine-Tune) a pre-trained LLM and use it
3. Train an LLM from scratch and use it

Note that whichever of these RAG approaches you adopt, you can still have the model answer based on your own data. In the order 1, 2, 3, the need for training expertise and the development and running costs tend to grow, so the ease of getting started can also be said to follow the order 1, 2, 3. We often suggest starting with 1 and, when problems arise, discussing whether to move on to 2 or 3 and how to validate that step.

- Can an open-source Japanese LLM be used as the LLM for RAG, and if so, does the same thinking apply?

Basically, we believe it is the same. When using an open-source Japanese LLM, one method is to host a Web API for inference using SageMaker. It is an excellent option when you want to manage the LLM model in-house for reasons such as cost, data protection, or internal policy.

- Compared with a proprietary model, could re-training an open-source LLM yield better accuracy?

It is possible. Please see this blog, which compares ChatGPT with a Fine-Tuned OpenCALM, the Japanese large language model released by CyberAgent, Inc. on May 11, 2023.

After discussions like these, the conversation often develops into trying Fine-Tuning an LLM in-house, and there is nothing better than starting small. For cost-effective use of generative AI, AWS provides EC2 instances equipped with GPUs, Trainium, and Inferentia, as well as SageMaker, which can be used across the entire machine learning workflow and is also useful for Fine-Tuning LLMs. However, some customers would first need to create an AWS account, or they have a company account but have not been granted the necessary permissions. Then again, purchasing a GPU-equipped machine to start experimenting costs procurement time and effort.

The SageMaker Studio Lab option

This is where AWS provides SageMaker Studio Lab, a free notebook service that requires no AWS account. Independently of any AWS account, you can register with just an email address; see here for how to get started. SageMaker Studio Lab is also linked with the Machine Learning Note (機械学習帳), so you can use it to start acquiring machine learning skills right away.

Hearing "free notebook," some of you may have had concerns like the following:

- Doesn't it become paid once you use a GPU? No, GPUs are free to use on SageMaker Studio Lab.
- Doesn't it become paid once you use storage? No, 15 GB of disk space is available. In addition, although it is an area that is wiped once you Stop Runtime, you can also use disk space beyond that 15 GB (this becomes an important element for LoRA, described later).
- Aren't the available libraries and Python version fixed? No; as noted above, by saving a Conda environment in the storage area that survives Stop Runtime, you can build your own environment that remains usable continuously.
- Isn't it hard to carry what I develop on SageMaker Studio Lab over to another development environment? You can integrate with a Git repository, and migration to SageMaker Studio is possible.

At the root of these concerns is the question of whether the service is fine for one-off use but insufficient as an environment for continuous development. For those who start with study and later want to keep using the environment and artifacts they built, SageMaker Studio Lab provides the features above. In addition, it offers Notebook Jobs, a feature for turning a notebook you created into a job that runs on AWS's larger compute resources (the link is to the SageMaker documentation, but think of the same button as existing in SageMaker Studio Lab). Note that Notebook Jobs requires obtaining an AWS account and incurs charges for using SageMaker. The important point is that options are provided even when you want to run training or inference beyond the compute resources SageMaker Studio Lab itself offers. We would be glad if you take away the image that SageMaker Studio Lab remains useful well beyond beginner-level machine learning study.

With that, let's get into the main topic: Fine-Tuning a Japanese LLM on SageMaker Studio Lab. Following this quiz blog, we will Fine-Tune OpenCALM to improve the accuracy of its quiz answers.

Preparing to run the Japanese LLM OpenCALM on SageMaker Studio Lab

Everything from here on stays within SageMaker Studio Lab's own features, that is, within the free tier; we will not use the aforementioned Notebook Jobs (a separate blog on using Notebook Jobs is planned). Reading on will be smoothest if you have already obtained a SageMaker Studio Lab account; the sign-up procedure is here. The implementation in this blog is based on this blog, and in particular the implementation references the following. Please consult them as well.

https://huggingface.co/cyberagent/open-calm-7b
https://github.com/aws-samples/aws-ml-jp/tree/main/tasks/generative-ai/text-to-text/fine-tuning/instruction-tuning/Transformers

Log in to SageMaker Studio Lab, select GPU, and click Start runtime. On first run, you may be asked to register a mobile number and confirm it via SMS. If the mobile number check does not go through with a Japanese phone number, select Japan +81 when entering the number and enter it without the leading 0. After that, pass the screen that checks you are not a robot. Because SageMaker Studio Lab is popular, a GPU may not be available; in that case, please try several times.

Create a working directory named llm-lora-challenge: right-click in the left-hand file browser, choose New Folder, and create the llm-lora-challenge directory. From here on, all work is done under this llm-lora-challenge directory.

Next, install the required libraries. There are two approaches; here we take the first one below, creating a Conda environment to improve portability. Sharing the default environment across multiple purposes can break your work through library version conflicts and the like, and this approach prevents that.

1. Create a dedicated Conda environment: create llm_finetuning.yml and run Build Conda Environment from the GUI to build a conda environment named llm_finetuning
2. (Reference) Install libraries into the default Conda environment: create requirements.txt and run pip install from a notebook cell, or create requirements.txt, open a terminal, switch to the default Conda environment, and run pip install from the terminal

Create llm_finetuning.yml as follows.

name: llm_finetuning
dependencies:
  - python=3.10
  - deepspeed
  - pip
  - pip:
    - git+https://github.com/huggingface/peft.git@207d2908650f3f4f3ba0e21d243c1b2aee66e72d
    - bitsandbytes==0.39.0
    - accelerate==0.20.3
    - transformers==4.30.1
    - tokenizers==0.13.3
    - pynvml==11.4.1
    - protobuf==3.20.2
    - scipy
    - optimum
    - appdirs
    - loralib
    - black
    - black[jupyter]
    - datasets
    - fire
    - sentencepiece
    - evaluate
    - einops
    - ipykernel

Right-click the created llm_finetuning.yml, click Build Conda Environment, and click OK on the confirmation screen. A terminal starts and the installation begins. When the completion message is displayed, you are done. Launch the Launcher from the plus button at the top left and check that the conda environment has been created: if llm_finetuning:Python appears in the Notebook section, it succeeded. All subsequent steps use the llm_finetuning environment. Click llm_finetuning:Python on the Launcher screen, and once the notebook opens, confirm that the kernel shown at the top right reads llm_finetuning:Python. If it does not, click it and choose llm_finetuning:Python from Select Kernel.

Here is the final layout up front. It includes files you will create or download in the following steps, as well as files generated by running the notebooks.

model/ : model files created by running Fine-Tuning are saved here
llm_finetuning.yml : file for building the llm_finetuning Conda environment used in this blog
data/aio_02_train.jsonl : downloaded file for Fine-Tuning
data/aio_02_train_formatted.jsonl : file formatted for use in Fine-Tuning
templates/simple_qa_ja.json : prompt template
OpenCALM_inf.ipynb : notebook for inference, created in this blog
OpenCALM_format.ipynb : notebook that formats the quiz data for Fine-Tuning, created in this blog
OpenCALM_finetune.ipynb : notebook for Fine-Tuning, created in this blog
OpenCALM_finetuned_inf.ipynb : notebook that runs inference with the Fine-Tuned model, created in this blog

Calling OpenCALM inference on SageMaker Studio Lab

Here, let's check how well the OpenCALM model can answer quizzes before Fine-Tuning. Create a new ipynb file, OpenCALM_inf.ipynb, and run the following source code. OpenCALM can be used from the HuggingFace Transformers library. The first run takes a while because the model download runs; from the second time onward, the downloaded model is reused and things get faster. By setting model_name to one of the following (listed in ascending order of parameter count), you can use OpenCALM models with different numbers of parameters. It may be interesting to switch between them and see how the answers change.

cyberagent/open-calm-small
cyberagent/open-calm-medium
cyberagent/open-calm-large
cyberagent/open-calm-1b
cyberagent/open-calm-3b
cyberagent/open-calm-7b

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'cyberagent/open-calm-1b'
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
    cache_dir="/tmp/model_cache/",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

inputs = tokenizer(
    "映画「ウエスト・サイド物語」に登場する2つの少年グループといえば、シャーク団と何団?答えは「",
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    tokens = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.05,
        pad_token_id=tokenizer.pad_token_id,
    )

output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(f'{model_name}:{output}')
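Before choosing a model_name, it can help to have a rough feel for how large each download will be. The sketch below estimates the fp16 footprint from nominal parameter counts; the counts are assumptions based on the Hugging Face model cards rather than values from this blog, and actual file sizes include additional overhead such as config and tokenizer files.

```python
# Rough fp16 footprint per OpenCALM variant. Parameter counts are nominal
# values assumed from the Hugging Face model cards; treat the results as
# approximations, not exact download sizes.
NOMINAL_PARAMS = {
    "cyberagent/open-calm-small": 160e6,
    "cyberagent/open-calm-medium": 400e6,
    "cyberagent/open-calm-large": 830e6,
    "cyberagent/open-calm-1b": 1.4e9,
    "cyberagent/open-calm-3b": 2.7e9,
    "cyberagent/open-calm-7b": 6.8e9,
}

def fp16_gigabytes(num_params: float) -> float:
    """Each parameter stored as float16 occupies 2 bytes."""
    return num_params * 2 / 1e9

for name, params in NOMINAL_PARAMS.items():
    print(f"{name}: ~{fp16_gigabytes(params):.1f} GB")
```

This back-of-the-envelope arithmetic is why the larger variants should be cached under /tmp rather than in the 15 GB persistent area.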
If you see the following error, SageMaker Studio Lab may be running in CPU mode. The code in this blog only works in GPU mode; please log out once, switch to GPU mode, and try again.

RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

Because LLM output involves probabilistic elements, it will not always be the same. In the result below, the model is attempting some kind of answer, but the answer that comes back is not correct.

cyberagent/open-calm-1b:映画「ウエスト・サイド物語」に登場する2つの少年グループといえば、シャーク団と何団?答えは「ダンシング」だよです。少年たちの友情や闘志を感じるミュージカルは老若男女から人気で、「美女と野獣」(1992)から現在まで数多くの映画が制作されています。そしてウェストミンスター寺院の鐘の音が聞こえるまで、「アストレート」や「ラブアクチュアリー」、そして今世紀最大のヒット作となったのだ。

What needs attention here is the file size of the downloaded model. Look at the model names: names such as 1b, 3b, and 7b represent the scale of each model's parameter count. Since b stands for billion, you can see that 7b is a model on the scale of 7 billion parameters. Its model files exceed 10 GB, and saving them in SageMaker Studio Lab's persistent area (which survives Stop Runtime) would squeeze its capacity. Considering continued development, we want to keep that area as free as possible for things like saving the Conda environment, so we work around this by specifying /tmp/model_cache/ as cache_dir. When we ran AutoModelForCausalLM.from_pretrained with cyberagent/open-calm-7b specified, the first load took a few minutes. Since this happens only once each time you Start Runtime, that seems within an acceptable range.

Fine-Tuning OpenCALM with LoRA on SageMaker Studio Lab

Now let's prepare the Fine-Tuning data needed to improve OpenCALM's quiz-answering accuracy. Following this blog, we convert the quiz data into a Fine-Tuning format and save it. Create a new OpenCALM_format.ipynb, copy the following source code into a cell, and run it. The formatted quiz data is saved to data/aio_02_train_formatted.jsonl.

!wget -P data https://jaqket.s3.ap-northeast-1.amazonaws.com/data/aio_02/aio_02_train.jsonl

# Convert the raw .jsonl into the instruction/output format for Fine-Tuning
import pandas as pd

df = pd.read_json("data/aio_02_train.jsonl", orient="records", lines=True)
df = df.rename(columns={"question": "instruction", "answers": "output"})
df = df[["instruction", "output"]]
df["output"] = df["output"].apply(lambda x: f"{x[0]}」")
df["input"] = ""
print(df.shape)
df.to_json(
    "data/aio_02_train_formatted.jsonl", orient="records", force_ascii=False, lines=True
)
df.head(2)

Let's look at the created data. Because it is quiz data, you can see that it consists of pairs of a question (instruction column) and an answer (output column).

Next, let's prepare to train OpenCALM for quiz answering. We provide a prompt template file; this template is used when feeding training data to OpenCALM. Carry out the following:

- Create a templates directory
- Download this template
- Place it in the templates directory on SageMaker Studio Lab (files can be dragged & dropped)

Below is the content of the template. By including 答えは「 ("the answer is") as the prompt following the instruction, it encourages an answer. In fact, we adopted as the template the part of the earlier inference attempt where we tried to get OpenCALM to answer by feeding it 答えは「. You can see there are cases where {input} is used and cases where it is not. The {instruction} part differs for each quiz data sample, while {input} can be used when you want to add a separate prompt; {input} is not used in this blog.

Create the OpenCALM_finetune.ipynb file. The custom utility class Prompter plays the role of reading this template file and passing data to OpenCALM in the form the template specifies. Download Prompter and copy & paste the code into a cell; importing it as a separate .py module is also fine.

Finally, prepare the code that Fine-Tunes OpenCALM. Download this Fine-Tuning code and copy/paste it into a cell. When pasting it into a cell, comment out the following lines, which are not needed:

...
# from utils.prompter import Prompter
...
# if __name__ == "__main__":
#     fire.Fire(train)

Importing it as a separate .py module is also fine.

In Fine-Tuning, a pre-trained model is re-trained with additional data to improve accuracy. Compared with training from parameters initialized with random values, Fine-Tuning, which uses the pre-trained model's parameters as initial values, has the advantages of higher training cost efficiency and higher model accuracy. However, Fine-Tuning without any ingenuity makes all parameters the target of updates, worsening cost efficiency, and carries the risk of catastrophic forgetting, a phenomenon in which the model loses the general concepts the pre-trained model had acquired. From this arose PEFT (Parameter-Efficient Fine-Tuning), a research field and family of methods that improve accuracy cost-efficiently with a small number of updated parameters. In practice, you can enjoy its benefits by using a library that implements PEFT; an implementation is available as a Hugging Face library. In this blog we take on one of its methods, LoRA (Low-Rank Adaptation of Large Language Models). LoRA leaves the pre-trained model's parameters as they are and updates only newly added parameters (neural network weights). The method attracts attention for the following advantages:

- Reducing the number of parameters to update can, in some cases, reduce memory use and shorten training time
- By swapping out only the LoRA part, Fine-Tuned models re-trained for specific tasks can, in some cases, be utilized efficiently

Let's read the Fine-Tuning source code focusing on the parts that express LoRA. The actual source code also contains implementation such as the tokenizer and saving of intermediate training results, but the points of LoRA are as follows:

1. Load the pre-trained model on which LoRA is based, via AutoModelForCausalLM.from_pretrained
2. Set the LoRA hyperparameters in LoraConfig
3. Pass the model obtained in 1. and the config set in 2. to get_peft_model to obtain a model for LoRA
4. Pass the model object obtained in 3. to transformers.Trainer to obtain a trainer
5. Use the trainer from 4. and call trainer.train

Also, because cache_dir="/tmp/model_cache/" is specified in AutoModelForCausalLM.from_pretrained, the pre-trained model files passed to LoRA are stored in the area of SageMaker Studio Lab that is not persisted, while specifying a directory name in output_dir saves the LoRA files to the persisted area. This lets you run LoRA without squeezing SageMaker Studio Lab's persistent storage.

That completes the preparation. Now, let's run Fine-Tuning! Paste the following code into a cell and run it. If you run out of memory, stop and close other notebooks. Training the 1b model takes about 10 to 20 minutes, and the 7b model around 2 hours. If you first simply want to see it run, try model_name with a model with fewer parameters, such as cyberagent/open-calm-1b. In this blog we try the 1b model.

model_name = "cyberagent/open-calm-1b"
model_name_base = model_name.split("/")[-1]

hyperparameters = {
    "base_model": model_name,
    "pad_token_id": 1,
    "data_path": "data/aio_02_train_formatted.jsonl",
    "num_epochs": 1,  # default 3
    "cutoff_len": 256,
    "group_by_length": False,
    "output_dir": "model",
    "lora_target_modules": ['query_key_value'],
    "lora_r": 16,
    "batch_size": 32,
    "micro_batch_size": 4,
    "prompt_template_name": "simple_qa_ja",
}

train(**hyperparameters)

model_name specifies OpenCALM. The larger you make the following, the longer training takes, but accuracy may improve. Note that accuracy does not necessarily keep improving as you keep increasing them.

num_epochs : hyperparameter for the number of passes made over the training samples
lora_r : called the matrix rank; a hyperparameter on which the number of parameters LoRA updates depends

At first, it is a good idea to make these small, run LoRA on a model with few parameters, and confirm that it operates correctly.

If progress logs against Epoch like the following are printed, training is running. When the Epoch in the log reaches the num_epochs you specified, training is complete.

Training Alpaca-LoRA model with params:
base_model: cyberagent/open-calm-1b
data_path: data/aio_02_train_formatted.jsonl
output_dir: model
batch_size: 32
...
[XXXXX/XXXXXX X:XX:XX < 00:00, X.XX it/s, Epoch 1.00/1]

Even if training takes a long time and exceeds the GPU usage time limit that SageMaker Studio Lab allows, the in-progress LoRA files are saved in the model directory, so you can later re-run training from the trained LoRA files or use them for inference.

Running inference with the OpenCALM Fine-Tuned on SageMaker Studio Lab

Thanks to the work so far, the OpenCALM model Fine-Tuned with LoRA can be expected to answer quizzes well. Let's check the effect right away. Create a new OpenCALM_finetuned_inf.ipynb. We implement it with reference to the inference sample code. That sample code is written for hosting on a SageMaker Inference Endpoint; since this blog does not use SageMaker Inference Endpoint, we paste only the necessary code into cells, following the steps below.

Create the OpenCALM_finetuned_inf.ipynb file. The Prompter class used during LoRA Fine-Tuning is required. As before, download Prompter and paste it into a cell; importing it as a separate .py module is also fine.

From the inference sample code, extract the import-related part and paste it into a cell. from utils.prompter import Prompter is unnecessary if you pasted Prompter into a cell.

import os
import sys
import json
from typing import Dict

import torch
import transformers
from peft import PeftModel
from transformers import GenerationConfig, AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList, BitsAndBytesConfig
import deepspeed

Paste the StopOnTokens class into a cell. It is a class that supplies the stopping condition when generating text later.

class StopOnTokens(StoppingCriteria):
    def __init__(self, stop_ids):
        self.stop_ids = stop_ids

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        for stop_id in self.stop_ids:
            if input_ids[0][-1] == stop_id:
                return True
        return False

Create the prompter, tokenizer, and model. Paste the following source code into a cell.

base_model = "cyberagent/open-calm-1b"
device = "cuda"
prompt_template = "simple_qa_ja"
lora_weights = "model"

prompter = Prompter(prompt_template)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=False,
    torch_dtype=torch.float16,
    device_map="auto",
    cache_dir="/tmp/model_cache/"
)
print("Loading Lora Weight")
model = PeftModel.from_pretrained(
    model,
    lora_weights,
    torch_dtype=torch.float16,
)
model.model_parallel = False
if torch.cuda.device_count() > 1:
    model.is_parallelizable = True
    model.model_parallel = True

Finally, paste into a cell the code that builds the prompt, sets the text-generation parameters, and then generates text.

instruction = "映画「ウエスト・サイド物語」に登場する2つの少年グループといえば、シャーク団と何団?"
input = ""
max_new_tokens = 32
stop_ids = [1, 0]

prompt = prompter.generate_prompt(instruction, input)
inputs = tokenizer(
    prompt, add_special_tokens=False, return_token_type_ids=False, return_tensors="pt"
).to(device)
generation_config = GenerationConfig(
    max_new_tokens=max_new_tokens,
    return_dict_in_generate=True,
    output_scores=True,
    temperature=0.1,
    do_sample=False,
    num_beams=5,
    pad_token_id=1,
    bos_token_id=0,
    eos_token_id=0
)
with torch.no_grad():
    generation_output = model.generate(
        **inputs,
        generation_config=generation_config,
        stopping_criteria=StoppingCriteriaList([StopOnTokens(stop_ids)]),
    )
s = generation_output.sequences[0, inputs['input_ids'].size(1):]
output = tokenizer.decode(s, skip_special_tokens=True)
output
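The prompter.generate_prompt call fills in the prompt template for each question. As a stand-alone illustration of that substitution, here is a minimal sketch; the inlined template string is an assumption modeled on the 答えは「 prompt described earlier, not the actual contents of templates/simple_qa_ja.json, and the real Prompter class downloaded from the repository should be used in the notebooks.

```python
# Illustrative stand-in for Prompter.generate_prompt. The template string
# below is assumed for illustration; the real template lives in
# templates/simple_qa_ja.json.
SIMPLE_QA_JA = "{instruction}\n答えは「"

def generate_prompt(instruction: str, input_text: str = "") -> str:
    # {input} is not used in this blog, so this sketch ignores input_text.
    return SIMPLE_QA_JA.format(instruction=instruction)

prompt = generate_prompt(
    "映画「ウエスト・サイド物語」に登場する2つの少年グループといえば、シャーク団と何団?"
)
print(prompt)
```

The point is simply that every training sample and every inference prompt ends with the same 答えは「 cue, so the model learns to complete it with the quoted answer.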
Let's read the inference source code focusing on the parts that express LoRA. The actual source code also includes implementation such as the tokenizer, but the points of LoRA are as follows:

1. Load the pre-trained model on which LoRA is based, via AutoModelForCausalLM.from_pretrained
2. Pass the model obtained in 1. together with lora_weights = "model" to PeftModel.from_pretrained to obtain the model
3. Call model.generate on the model obtained in 2.

Recall that the save destination of the LoRA files was the model directory. As during Fine-Tuning, cache_dir="/tmp/model_cache/" is specified in AutoModelForCausalLM.from_pretrained, so inference with the LoRA model can also run without squeezing SageMaker Studio Lab's persistent area.

Now, run OpenCALM_finetuned_inf.ipynb. In our verification, the correct answer was returned. Note that, because LLM output involves probabilistic elements, you may not get the same answer as this blog.

What is interesting here is that the Fine-Tuning file contained no similar quiz. Checking the quiz data used for Fine-Tuning, the only quiz about West Side Story was the following:

{"instruction":"ミュージカル「ウエストサイド・ストーリー」の作曲で知られる音楽家は誰でしょう?","output":"レナード・バーンスタイン」","input":""}

It is possible that text about West Side Story was included in the training data of the base OpenCALM, and that the Fine-Tuning specialized for quiz answering enabled it to give the correct answer.

Going further

Using this blog as a reference, try the following challenges.

Try cyberagent/open-calm-7b
The code in this blog tried the 1b model, but in the author's checks, everything ran up to the 7b model. How does answer accuracy change with model size, and what about processing time? Please give it a try.

Migrating to SageMaker Studio
Try running the code you created on SageMaker Studio Lab in a SageMaker Studio Notebook. You will be able to use more compute resources and integrate with many features that realize machine learning workflows.

Try Japanese LLMs other than OpenCALM
For any model implemented on Hugging Face, you can use this blog's implementation as a reference. For example, try modifying it to use rinna/japanese-gpt-neox-3.6b · Hugging Face, and experiment with what kind of prompt works well for the rinna model. It may well have characteristics different from OpenCALM's.

In closing

The content explained in this blog can be run in the same way on SageMaker Studio Notebook or a SageMaker Notebook Instance, and if you already have a computing environment equipped with a GPU, you can likewise run it by installing Jupyter Notebook or JupyterLab. Over the past few years, generative AI has evolved to the point where the use cases it can serve at a practical level are wide-ranging. Readers probably range from those put in charge of a top-down evaluation to those acquiring it bottom-up as an interesting technology. When evaluating a new technology, it is important to consider on paper which social or internal problems that technology could solve, and to choose, among the problems you hold, themes with high priority and high cost-effectiveness. At the same time, I believe it is also important to build something that runs even with a small budget and team, and to experience through the technology what moves and how. If, through this blog, you experience how open-source Japanese LLMs behave and what they could be used for, and come to feel the potential that open-source Japanese LLMs hold, I would be very glad.

Author

中島 作樹 (Nakajima)
A Solutions Architect mainly in charge of customers in western Japan. Completing a doctorate as a working professional led him to make AI/ML his specialty. Active across a wide range, from general system support to generative AI system development using Amazon Bedrock and introductions to AI/ML using Amazon SageMaker Studio Lab.