Introduction

I'm an engineer developing image recognition AI at Safie Inc. In this post, I'll walk through how to implement a system that automatically counts vehicles passing through a specific area, using the state-of-the-art object detection algorithm YOLOv8. The technique applies to many scenarios, such as traffic volume surveys and parking lot utilization analysis. The article covers the whole pipeline, from object detection with YOLOv8, through post-processing of the detection results, to the actual vehicle counting, with concrete code examples along the way. I hope it proves useful to anyone interested in building practical AI-powered solutions.

Contents:
- Introduction
- Goals
- Procedure
- Environment setup
- Object detection and tracking with YOLOv8
- Explanation of the command arguments
- Saved files
- Post-processing the detection results
- Running the vehicle count
- How to use the script
- Challenges
- Summary
- Closing

Goals

This project aims to provide the following capabilities:
- Traffic counting from camera footage: detect and count vehicles passing along a road.
- Identification of multiple object types: distinguish and count different kinds of traffic, such as cars, trucks, buses, and bicycles.
- Counting within a specific area: count only the objects that pass through a designated region of the frame (e.g., an intersection or a crosswalk).
Procedure

The implementation of the YOLOv8-based traffic counting system proceeds in the following steps:

1. Environment setup
2. Object detection and tracking with YOLOv8
3. Post-processing the detection results
4. Implementing the vehicle count
5. Outputting the results

Through these steps we will build a basic traffic counting system on top of YOLOv8. Each step is explained in detail in the sections below.

Environment setup

First, install the required libraries by running the following command:

```shell
pip install ultralytics opencv-python numpy matplotlib shapely
```

Object detection and tracking with YOLOv8

Once the environment is ready, run detection and tracking from the command line:

```shell
yolo task=detect mode=track model=yolov8x.pt source=<path/to/input/video> \
    save_txt save_conf save=True project=<path/to/output/dir> classes=1,2,3,5,7
```

Here 1, 2, 3, 5, and 7 are the 0-indexed COCO class IDs used by the Ultralytics models for bicycle, car, motorcycle, bus, and truck. Running this command performs object detection and tracking on the specified video file and saves the results to the specified directory.
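As a quick cross-check, the value for the classes argument can be derived from the 0-indexed class-name ordering that Ultralytics YOLOv8 COCO models use. This is a minimal sketch; class_ids is a hypothetical helper, not part of the toolchain:

```python
# 0-indexed COCO class names as used by Ultralytics YOLOv8 models
# (only the first few entries matter for vehicle counting)
COCO_NAMES = [
    "person", "bicycle", "car", "motorcycle", "airplane",
    "bus", "train", "truck", "boat",
]

def class_ids(names: list[str]) -> list[int]:
    """Map class names to their 0-indexed COCO IDs."""
    return [COCO_NAMES.index(n) for n in names]

vehicle_ids = class_ids(["bicycle", "car", "motorcycle", "bus", "truck"])
print(",".join(map(str, vehicle_ids)))  # → 1,2,3,5,7  (value for classes=)
```

Note that these 0-indexed IDs differ from the 1-indexed category IDs of the original COCO annotation files, which is an easy source of off-by-one mistakes.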
Explanation of the command arguments

- task : the task to run; detect selects object detection.
- mode : the inference mode; track performs object detection plus multi-object tracking, while predict performs detection only.
- model : the YOLOv8 model to use. yolov8x.pt is the extra-large model; other sizes such as yolov8s.pt or yolov8l.pt can be specified as needed.
- source : the input source to run inference on.
- save_txt : saves the detection and tracking results as text files. For each frame, the object class IDs, bounding box coordinates, and so on are recorded.
- save_conf : additionally saves the confidence score of each detection to the text files, so you can check how confidently each object was detected.
- save : when True, the annotated video file is saved to the specified directory. The video is not needed for the vehicle counting that follows, so you can set save=False to speed up processing.
- project : the root directory for the results. Everything is saved under the directory given here; if omitted, results go to the default runs directory.
- classes : restricts detection to specific class IDs; only objects of the listed classes are detected. The model uses the 0-indexed COCO class IDs, so specifying 1,2,3,5,7 detects only bicycles, cars, motorcycles, buses, and trucks.

Saved files

- Video file : a movie file with the detection results overlaid on the input video. A results folder is created under the directory given by the project option, and the video is saved inside it.
- Text files : text files containing the object information of each detected frame. One text file is saved per frame, holding the class IDs, bounding box coordinates, and related information.
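With save_txt and save_conf enabled in track mode, each line of a per-frame label file holds, in order: class ID, normalized center x/y, normalized width/height, confidence, and tracking ID. The following dependency-free sketch decodes one such line; parse_label_line is a hypothetical helper, not part of Ultralytics:

```python
def parse_label_line(line: str) -> dict:
    """Parse one line of a YOLOv8 label file written with save_txt + save_conf.

    Field order: class_id cx cy w h confidence tracking_id
    (cx, cy, w, h are normalized to the 0-1 range).
    """
    fields = line.split()
    return {
        "class_id": int(fields[0]),
        "center_x": float(fields[1]),
        "center_y": float(fields[2]),
        "width": float(fields[3]),
        "height": float(fields[4]),
        "confidence": float(fields[5]),
        "tracking_id": int(fields[6]),
    }

rec = parse_label_line("7 0.467559 0.775986 0.196506 0.249537 0.94994 1")
print(rec["class_id"], rec["tracking_id"])  # → 7 1
```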
Example of an output text file:

```
7 0.467559 0.775986 0.196506 0.249537 0.94994 1
7 0.302576 0.186449 0.136675 0.144221 0.91664 2
7 0.70413 0.417058 0.118785 0.119517 0.897981 3
7 0.717557 0.0861716 0.0540201 0.106011 0.838547 4
2 0.0168792 0.141835 0.0337098 0.0552144 0.810139 5
2 0.0770008 0.229309 0.068876 0.0676976 0.728882 6
2 0.532953 0.29052 0.0817046 0.088725 0.653665 7
2 0.194599 0.215173 0.0581096 0.0713775 0.633438 8
2 0.322347 0.272015 0.0641675 0.0660537 0.620266 9
```

Post-processing the detection results

Next, we convert the tracking results produced by YOLOv8 into a form that is convenient for vehicle counting.

```python
import argparse
import glob
import os
import re

import cv2
import pandas as pd
from tqdm import tqdm
from ultralytics.utils import yaml_load
from ultralytics.utils.checks import check_yaml

CLASSES = yaml_load(check_yaml("coco128.yaml"))["names"]


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_video", type=str, required=True, help="Path to the input video file")
    parser.add_argument("--input_dir", type=str, required=True, help="Directory containing the input label files")
    parser.add_argument("--output_dir", type=str, default="out", help="Directory to save the output files")
    parser.add_argument("--output_labels", type=str, default="output_video_results.csv", help="Name of the output CSV file")
    parser.add_argument("--vid_stride", type=int, default=1, help="Video stride for processing")
    args = parser.parse_args()
    return args


def get_video_resolution(video_path: str) -> tuple[int, int]:
    """Get the resolution of the given video.

    Args:
        video_path (str): path to the video file.

    Returns:
        tuple[int, int]: resolution as (width, height).
    """
    cap = cv2.VideoCapture(video_path)
    try:
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    finally:
        cap.release()
    return width, height


def get_frame_num_from_label_file(label_file: str) -> int:
    """Extract the number, without zero padding, from a file name such as 001.txt.

    Args:
        label_file (str): file name ending with a frame number.

    Returns:
        int: the frame number.
    """
    match = re.search(r"(\d+)\.txt", label_file)
    assert match is not None
    return int(match.group(1))


def merge_labels(input_dir: str, input_video_path: str, output_dir: str, output_labels: str) -> None:
    """Merge the txt label files and write a single CSV file.

    Args:
        input_dir (str): directory containing the input txt files.
        input_video_path (str): path to the input video file.
        output_dir (str): output directory.
        output_labels (str): output CSV file name.
    """
    # Collect the label files
    label_files = glob.glob(os.path.join(input_dir, "*.txt"))
    # Sort by the frame-number part of the file name (digits without zero padding)
    label_files.sort(key=lambda x: get_frame_num_from_label_file(x))

    # Get the resolution of the input video
    width, height = get_video_resolution(input_video_path)

    # Read the txt files and output a CSV file
    df_list = []
    column_names = ["class_id", "center_x", "center_y", "width", "height", "confidence", "tracking_id"]
    for label_file in tqdm(label_files, desc="Loading labels"):
        # Get the frame number from the file name
        frame_num = get_frame_num_from_label_file(label_file)
        # Load the file
        df = pd.read_csv(label_file, sep=" ", header=None)
        # Set the column names
        df.columns = column_names
        # Convert normalized coordinates to pixel coordinates
        df["center_x"] = (df["center_x"] * width).astype(int)
        df["center_y"] = (df["center_y"] * height).astype(int)
        df["width"] = (df["width"] * width).astype(int)
        df["height"] = (df["height"] * height).astype(int)
        # Add the frame number
        df["frame_num"] = frame_num
        # Add the class label
        df["class_label"] = df["class_id"].apply(lambda x: CLASSES[x])
        # Reorder the columns
        df = df[
            ["frame_num", "class_label", "class_id", "center_x", "center_y",
             "width", "height", "confidence", "tracking_id"]
        ]
        df_list.append(df)

    # Write the merged dataframe
    concat_df = pd.concat(df_list)
    concat_df.to_csv(os.path.join(output_dir, output_labels), index=False)


def aggregate_labels(output_dir: str, output_labels: str) -> None:
    """Aggregate the labels.

    Args:
        output_dir (str): output directory.
        output_labels (str): output CSV file name.
    """
    # Load the merged label file
    concat_df = pd.read_csv(os.path.join(output_dir, output_labels))

    # Count how often each class_id appears for each tracking_id
    class_id_counts: dict[int, dict[int, int]] = {}
    for _, row in concat_df.iterrows():
        if row["tracking_id"] not in class_id_counts:
            class_id_counts[row["tracking_id"]] = {}
        if row["class_id"] not in class_id_counts[row["tracking_id"]]:
            class_id_counts[row["tracking_id"]][row["class_id"]] = 0
        class_id_counts[row["tracking_id"]][row["class_id"]] += 1

    # Determine the majority class for each tracking_id
    class_id_map: dict[int, int] = {}
    for track_id, class_id_count in class_id_counts.items():
        class_id_map[track_id] = max(class_id_count, key=class_id_count.__getitem__)

    # Overwrite each track with its majority class
    for track_id, class_id in class_id_map.items():
        concat_df.loc[concat_df["tracking_id"] == track_id, "class_id"] = class_id
        concat_df.loc[concat_df["tracking_id"] == track_id, "class_label"] = CLASSES[class_id]

    concat_df.to_csv(os.path.join(output_dir, output_labels), index=False)


def main():
    args = parse_args()

    # Check that the directories exist
    if not os.path.exists(args.input_dir):
        print("input_dir not found")
        return
    if not os.path.exists(args.output_dir):
        os.makedirs(args.output_dir, exist_ok=True)

    # Check that the input video exists
    if not os.path.exists(args.input_video):
        print("input_video not found")
        return

    # Merge the label files
    merge_labels(
        input_dir=args.input_dir,
        input_video_path=args.input_video,
        output_dir=args.output_dir,
        output_labels=args.output_labels,
    )
    # Aggregate the labels
    aggregate_labels(args.output_dir, args.output_labels)


if __name__ == "__main__":
    main()
```

This script loads the label files generated by YOLOv8, aggregates the classes detected for each tracking ID, and associates each tracking ID with the class that appeared most often for it. This stabilizes the identification of each vehicle's type and converts the data into a format suited to vehicle counting.

Main functions of the script:
- Merge the label files generated from YOLOv8's inference results into a single CSV file.
- Convert the normalized coordinates (in the 0-1 range) to pixel coordinates.
- For each tracking ID, assign the most frequently observed class as that tracking ID's final class.
- Gather all the information into one CSV file, saved in a format suited to vehicle counting.
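The majority-vote step at the heart of aggregate_labels can be shown in isolation. The following dependency-free sketch mirrors its logic with collections.Counter; the per-track observations are made-up example data:

```python
from collections import Counter

# Hypothetical per-frame class observations, keyed by tracking ID.
# Track 1 flickers between truck (7) and car (2) across frames.
observations = {
    1: [7, 7, 2, 7, 7],
    2: [2, 2, 2],
}

# Assign each track the class it was detected as most often.
majority_class = {
    track_id: Counter(class_ids).most_common(1)[0][0]
    for track_id, class_ids in observations.items()
}
print(majority_class)  # → {1: 7, 2: 2}
```

Without this step, a track whose class flickers between frames would be counted inconsistently; the vote pins each tracking ID to a single class.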
Save the script above as a Python file and run it from the command line.

Example of the output CSV: the file has the columns frame_num, class_label, class_id, center_x, center_y, width, height, confidence, and tracking_id.

Running the vehicle count

Next, we run the vehicle count.

```python
import argparse

import pandas as pd
from shapely.geometry import Point, Polygon


def load_data(csv_file: str) -> pd.DataFrame:
    """Load a CSV file and return it as a dataframe.

    Args:
        csv_file (str): path to the CSV file.

    Returns:
        pd.DataFrame: Pandas dataframe holding the loaded data.
    """
    return pd.read_csv(csv_file)


def define_area(points: list[tuple[float, float]]) -> Polygon:
    """Build a Polygon object from the coordinates that make up the area.

    Args:
        points (list of tuples): coordinates defining the area [(x1, y1), (x2, y2), ...].

    Returns:
        Polygon: a Shapely Polygon object.
    """
    return Polygon(points)


def count_objects_in_area(df: pd.DataFrame, area_polygon: Polygon) -> dict:
    """Count the objects inside the area, per class.

    Args:
        df (pd.DataFrame): dataframe containing the object data.
        area_polygon (Polygon): Polygon defining the target area.

    Returns:
        dict: tracking IDs aggregated per class.
    """
    counts = {}
    for _, row in df.iterrows():
        class_label = row["class_label"]
        tracking_id = row["tracking_id"]
        center_x = row["center_x"]
        center_y = row["center_y"]

        # Check whether the object's center point lies inside the area
        point = Point(center_x, center_y)
        if area_polygon.contains(point):
            if class_label not in counts:
                counts[class_label] = set()
            counts[class_label].add(tracking_id)
    return counts


def print_counts(counts: dict) -> None:
    """Print the counting results.

    Args:
        counts (dict): tracking IDs per class.
    """
    for class_label, tracking_ids in counts.items():
        print(f"{class_label}: {len(tracking_ids)} objects")


def parse_args() -> tuple[str, list[tuple[float, float]]]:
    """Parse the command line arguments.

    Returns:
        tuple: the CSV file path (str) and the list of area coordinates (list of tuples).
    """
    parser = argparse.ArgumentParser(description="Count objects in a specified area from a CSV file.")
    parser.add_argument("csv_file", type=str, help="Path to the CSV file containing the object data.")
    parser.add_argument("area_points", type=float, nargs="+",
                        help="List of coordinates defining the area (x1 y1 x2 y2 ...).")
    args = parser.parse_args()

    # Split the flat coordinate list into (x, y) pairs
    if len(args.area_points) % 2 != 0:
        raise ValueError("The number of coordinates for area_points must be even.")
    area_points = [(args.area_points[i], args.area_points[i + 1])
                   for i in range(0, len(args.area_points), 2)]
    return args.csv_file, area_points


def main() -> None:
    # Parse the command line arguments
    csv_file, area_points = parse_args()
    df = load_data(csv_file)
    area_polygon = define_area(area_points)

    # Run the count
    counts = count_objects_in_area(df, area_polygon)

    # Print the results
    print_counts(counts)


if __name__ == "__main__":
    main()
```

This script counts the objects that passed through the specified area, per class, and prints the result. Counting unique tracking IDs in a set, rather than raw detections, ensures that each vehicle is counted once even though it appears in many frames.
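For intuition, the point-in-area test that Shapely performs in count_objects_in_area follows the classic even-odd (ray-casting) rule: cast a ray from the point and count how many polygon edges it crosses. The sketch below is a simplified, dependency-free illustration of that rule, not Shapely's actual implementation:

```python
def point_in_polygon(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    """Even-odd rule: cast a ray to the right and count edge crossings."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(400, 400), (600, 400), (600, 600), (400, 600)]
print(point_in_polygon(500, 500, square))  # → True
print(point_in_polygon(300, 500, square))  # → False
```

Note that the vertices must be listed in order around the polygon; an out-of-order list describes a self-intersecting shape and gives misleading results with either implementation.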
How to use the script

Run the script from the command line, passing the path to the CSV file and the coordinates of the area. The CSV file is the one created earlier by converting the detection and tracking results. The area coordinates are given as x/y pairs for each vertex of a polygon with three or more points. You can read off the coordinates by opening a frame extracted from the video in a paint tool, or by using an external web site for the same purpose.

In the following example, the file output_video_results.csv is used, and the four vertices that make up the area, (400, 400), (600, 400), (600, 600), (400, 600), are specified in order around the polygon:

```shell
python count_object.py output_video_results.csv 400 400 600 400 600 600 400 600
```

Running the script reports the number of objects of each class inside the specified area:

```
car: 23 objects
truck: 6 objects
```

Challenges

Class misdetection

The current system cannot classify minivans accurately and often misrecognizes them as buses or trucks. One likely cause is that the YOLOv8 model we use was trained on the COCO dataset, which consists mostly of images from overseas; minivans are less common there, so the model may not have learned them sufficiently.

One way to address this would be to collect domestic data and use it to further train (fine-tune) the model. This should improve recognition accuracy for minivans and reduce the risk of misrecognition.
Summary

In this article, we walked through the construction of a traffic counting system based on YOLOv8 in detail. The system leverages YOLOv8's highly accurate object detection to automatically count vehicles passing through a specific area, such as a road or a parking lot.

This implementation is a basic vehicle counting system, but by extending its functionality it can be applied to a variety of use cases. Here are a few implementation ideas:

Detecting diverse objects
- Function: detect objects other than vehicles
- Characteristics: supports the 80 classes of the COCO dataset
- Implementation: easily achieved by changing the class specification
- Example applications: surveys of pedestrians, bicycles, or wildlife

Tracking movement between multiple areas
- Function: count objects moving between specific areas
- Characteristics: track object trajectories across multiple polygon areas
- Implementation: add area definitions and trajectory-tracking logic
- Example applications: measuring right- and left-turning vehicles at intersections, counting customers entering and leaving a store

Real-time counting
- Function: real-time analysis of live footage
- Characteristics: enables immediate data acquisition and analysis
- Implementation: change the input source from a recorded video to a live camera feed
- Example applications: traffic monitoring, crowd-flow analysis at event venues

Closing

Safie is actively hiring engineers. If you're interested, please have a look here: https://safie.co.jp/teams/engineering/ We also offer casual interviews, so feel free to apply.

Thank you for reading to the end.