KINTO Technologies Tech Blog
My name is Nakagawa, and I am the team leader of the data engineering team in the analysis group at KINTO Technologies. Recently I have taken up golf and started paying attention to my cost per ball; my goal this year is to make my course debut! In this article, I would like to introduce how our data engineering team efficiently develops KINTO's analytics platform and provides the data needed for analysis in step with service launches.

Data Engineering Team's Goal

The data engineering team develops and operates an analytics platform. An analytics platform plays a behind-the-scenes role: it collects and stores data from internal and external systems and provides it in a form the business can use. So that data can be utilized immediately upon the launch of services, our goal is:

"In line with the launch of various services, we will aggregate data on our analytics platform and provide it immediately!"

Challenges

However, as the KINTO business expanded and we pursued the roles and goals above, the following challenges arose:

- Limited development resources (we are a small, elite team)
- An increase in systems to be linked due to business expansion
- An increase in modifications proportional to the number of linked systems (the increase in modifications is also driven by our agile business style of "starting small and growing big")

Solutions

To solve these challenges, we use AWS Glue for ETL. From the perspective of reducing workload, we focused on two aspects, operations and development, and approached the challenges with the following methods:

- Standardization aimed at no-code
- Automatic column expansion for a faster, more flexible analytics platform

Our Company's AWS Analytics Platform Environment

Before explaining the two improvements, I would like to describe our analytics platform environment. Our analytics platform uses AWS Glue for ETL and Amazon Athena as the database. In the simplest pattern, the structure loads data from source tables, accumulates raw data in a data lake in chronological order, and stores it in a data warehouse for utilization. When developing workflows and jobs for data linkage with AWS Glue, KINTO Technologies uses CloudFormation to deploy a series of resources, including workflows, triggers, jobs, data catalogs, Python, PySpark, and SQL. The main resources required for deployment are:

- YAML file (workflow, job, trigger, and other configuration information)
- Python shell script (for job execution)
- SQL file (for job execution)

As mentioned above, the development workload increased in proportion to the number of services, tables, and columns, and this began to strain our development resources. As described under Solutions, we addressed this with two main improvements. Here are the methods we used.

Standardization Aimed at No-Code

"Standardization aimed at no-code" was carried out in the following steps:

- Step 1 (2022): Standardization of Python programs
- Step 2 (2023): Automatic generation of YAML and SQL files

In the Step 1 improvement to the Python shell scripts, we focused on the fact that workflow development had been performed per service, and the Python shell scripts had likewise been developed, tested, and reviewed per workflow. This approach led to an increase in workload.
We moved forward with program standardization by unifying parts of the code that had been reused with slight modifications across workflows, and by making them general-purpose enough to accommodate variations in data sources. As a result, while development and review effort is now concentrated on the common code, there is no longer any need to develop source code for each workflow. If the data source is Amazon RDS or BigQuery, all processing, including data type conversion for Amazon Athena, is handled within the standardized part. Therefore, when starting data linkage for a service, no-code data linkage is achieved simply by writing settings in a configuration file.

Step 2, the automatic generation of YAML and SQL files, improves on the configuration files that remained necessary after Step 1, as well as the View definitions required for linkage with the source side. We used GAS (Google Apps Script) to automatically generate the YAML configuration files and the SQL files for the Views. This minimizes development work: simply enter the minimal necessary definitions, such as the workflow ID and the names of the tables to be linked, in a Google Spreadsheet, and the YAML configuration files and View SQL files are generated automatically.

Automatic Column Expansion for a Faster, More Flexible Analytics Platform

Before this improvement, table and item definitions that had already been defined at the data linkage source were defined again, in YAML, on the analytics platform side.[^1] At initial setup, the analytics platform therefore needed as many item definitions as the source, roughly 800 to 1,200 per service on average (20 to 30 tables × 20 items × both lake and DWH). Our company constantly expands its services under the philosophy of "starting small and growing big," which frequently results in backend database updates. Each update required carefully identifying and modifying the relevant portions among those 800 to 1,200 definitions, which significantly increased development workload.

What we came up with was a method in which, when accessing the data linkage source, the item definition information is linked at the same time, allowing automatic updates of the item definitions on the analytics platform. Since properly maintained definitions already exist on the source side, there is no reason not to take advantage of them! Column auto-expansion is implemented in the following steps:

1. Retrieve table information from the AWS Glue Data Catalog with glue_client.get_table.
2. Replace table['Table']['StorageDescriptor']['Columns'] with the item list col_list obtained from the data linkage source.
3. Update the AWS Glue Data Catalog with glue_client.update_table.
```python
import boto3


def update_schema_in_data_catalog(glue_client: boto3.client, database_name: str, table_name: str, col_list: list) -> None:
    """
    Args:
        glue_client (boto3.client): Glue client
        database_name (str): Database name
        table_name (str): Table name
        col_list (list): Column list of dictionaries
    """
    # Retrieve the table information from the AWS Glue Data Catalog
    table = glue_client.get_table(
        DatabaseName=database_name,
        Name=table_name
    )

    # Replace Columns with col_list
    data = table['Table']
    data['StorageDescriptor']['Columns'] = col_list
    tableInput = {
        'Name': table_name,
        'Description': data.get('Description', ''),
        'Retention': data.get('Retention', None),
        'StorageDescriptor': data.get('StorageDescriptor', None),
        'PartitionKeys': data.get('PartitionKeys', []),
        'TableType': data.get('TableType', ''),
        'Parameters': data.get('Parameters', None)
    }

    # Update the AWS Glue Data Catalog
    glue_client.update_table(
        DatabaseName=database_name,
        TableInput=tableInput
    )
```

In addition, when creating the item list obtained from the linkage source, we also map the differing data types of each database in the background. This lets us generate item definitions on the analytics platform based on the schema information from the source side.

One point we paid attention to with the automatic updating of item definitions is that the table structure of the analytics platform under our management could otherwise change without our knowledge. To address this concern, we implemented a system that sends a notification to Slack whenever a change occurs (a minimal sketch of this step is included at the end of this article). This prevents the table structure from changing unnoticed: the system detects changes, and after checking them against the source system, we can propagate them to downstream systems as needed.

[^1]: I won't go into details here, but AWS Glue includes a crawler that updates the data catalog. However, due to issues such as the inability to update with sample data or perform error analysis, we decided not to use it.

Conclusion

In this article, I introduced two ways we use AWS Glue in our analytics platform: "standardization aimed at no-code" and "automatic column expansion for a faster, more flexible analytics platform." With these two improvements, we succeeded in reducing development workload: even a data linkage job involving 40 tables now takes about one person-day, which has enabled us to achieve our goal of "aggregating data into the analytics platform and providing it immediately in line with the launch of various services!" I hope this serves as a useful reference for anyone who wants to reduce development workload in a similar way!
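As referenced above, here is a minimal sketch of what the change-detection and Slack-notification step could look like. This is an illustration under stated assumptions, not our production code: the webhook environment variable, helper name, and diff logic are all hypothetical.

```python
import json
import os
import urllib.request


def notify_schema_change(table_name: str, old_cols: list, new_cols: list) -> None:
    """Post a summary of column-definition changes to Slack.

    Assumes a Slack incoming-webhook URL is provided via the
    SLACK_WEBHOOK_URL environment variable (an illustrative convention).
    Column entries follow the Glue Data Catalog shape: {'Name': ..., 'Type': ...}.
    """
    old = {c['Name']: c.get('Type') for c in old_cols}
    new = {c['Name']: c.get('Type') for c in new_cols}

    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    retyped = sorted(k for k in set(old) & set(new) if old[k] != new[k])

    if not (added or removed or retyped):
        return  # no structural change, nothing to report

    text = (f"Schema change detected in `{table_name}`\n"
            f"added: {added}\nremoved: {removed}\ntype changed: {retyped}")
    payload = json.dumps({"text": text}).encode("utf-8")

    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Hooked in just before glue_client.update_table, a diff like this would ensure that no table structure changes without leaving a trace in Slack.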
Introduction

Hello! My name is Miura and I work in the Development Support Department at KINTO Technologies, assisting the Global Development Department. My day-to-day work includes managing tools for the Global Development Division, supporting office operations to create a smoother working environment for team members, and handling various inquiries. Lately, I've been really into following my favorite band. They're only active for one year, so I've been chasing their shows wherever I can! Now, back to the topic. Since most of my work involves a lot of detailed admin tasks, I try to find ways to make small improvements every day. In this article, I'll introduce some of the kaizen initiatives I've implemented at KINTO Technologies.

Kaizen So Far

At KINTO Technologies, being part of the Toyota Group, we use the term kaizen rather than improvement. Here's how we define it:🔻

Kaizen refers to the practice of eliminating waste in tasks or workflows and continuously improving the way we work to focus on higher-value activities. ^1

Since joining the company, I've carried out the following kaizen activities:

[1] Revising and updating mailing list management
[2] Revising the logbook and approval route for lending security cards
[3] Managing test devices
[4] Creating name tags for shared umbrellas

Let's take a closer look at the background, actions taken, and effects of each.

[1] Revising and Updating Mailing List Management📧

This initiative began in my very first month at the company, when I tried to call members to a meeting but had no idea who was on the mailing list. Although the Development Support Division, where I belong, had an internal mailing list, the Global Development Division didn't have anything like that! So I thought, why not create a similar one? But first, I had to identify which mailing lists even existed. Once I pulled the data, I was shocked: there were 94 mailing lists in use! Were we really using all of them? That question led me to carry out a full audit. First, following the example set by the Development Support Division, I created a similar list in Excel: a matrix with registered members on the vertical (Y) axis, mailing lists on the horizontal (X) axis, and a ● marking each registrant.

Mailing list matrix (excuse the heavy redactions🤣)

Each team leader reviewed the table, and I carried out the audit by confirming list administrators, clarifying the purpose of each list, and verifying registered members. To make the mailing list information accessible to everyone, I shared the table via our cloud storage, BOX. To prevent the list from becoming outdated, I set up a process where any update request must be submitted through a JIRA ticket, and I retained sole editing rights. Having the list makes it easy to check who is registered to which list and what types of lists exist. It also helped raise awareness across the Global Development Division that mailing lists don't update themselves. Another benefit of visualizing all the mailing lists was the ability to spot duplicates created for similar purposes. This yokoten (horizontal deployment) was possible because, although I belong to the Development Support Division, I also support the Global Development Division.

[2] Revising the Logbook and Approval Route for Lending Security Cards

At the Jinbocho Office, external vendors who come in more than twice a week are given security cards. It's a simple process, but the Excel file used for tracking didn't keep any history.
So I updated the file to support change tracking and made it easy to identify which cards are currently unused. Using conditional formatting and functions, only available cards can be selected, which prevents accidental deletion of user information and makes audits much easier. The number of cards and the available card numbers are now displayed automatically.

Regarding the approval route: because I belong to the Development Support Division, I couldn't submit requests for security card issuance on my own. I had to ask a member of the Global Development Division to do it on my behalf, just to follow the correct approval route. This roundabout process didn't match the actual work, so I raised the question with the relevant division: "Shouldn't we change this odd workflow?" After that, we organized the roles and the system of concurrent duties across the two divisions. Now, when I submit a request, I can select either the Development Support or the Global Development approval route. This change eliminated the need for others to step in on my behalf and reduced the time spent on individual coordination.✨

[3] Managing Test Devices📱

Until now, test and verification devices such as the smartphones used during system development were managed in a table on Confluence. But that made it difficult to see at a glance who was using which device, and the table often went out of date. Some devices even ended up being managed informally by individuals, and at one point someone almost purchased a new device without realizing we already had one. Around the same time, I found out that company-purchased books were being centrally managed in JIRA. That got me thinking: could we manage test devices the same way? ➡️ How We Made Book Management Easier

As we transitioned to JIRA, I took the opportunity to do a full inventory check. This gave us visibility into whether anything was missing, broken, or unused. (Some devices were even locked with unknown passwords.🔒) Because test devices are used daily, we physically checked each one during the audit, recorded the password settings, and uploaded photos of each device to its JIRA ticket. This resolved confusion when device names alone weren't clear enough. With the devices in JIRA, all members can check the rental status at a glance, and by setting rental expiration dates we can now track usage. Lending status is visualized, and detailed device information is included in each ticket. There is also no longer the hassle of forgetting to update Confluence when borrowing or returning a device, or of having to reach out on Slack every time. Most importantly, by linking devices to specific users and assigning return dates, I feel that everyone has become more aware that they are "borrowing" a device. I also set up JIRA to send reminders to the admin when return deadlines approach. Rina-san helped me implement this based on the existing book management system. Thank you so much for your support!

[4] Creating Name Tags for Shared Umbrellas☔

It all started with a request to "clean up the umbrellas that have been left in the umbrella stand at the entrance." So I checked the other umbrella stands as well. Any umbrellas that had been left for several days were announced internally and then disposed of.
One comment in response to that announcement suggested making the clean umbrellas slated for disposal available for anyone to use, by marking them with plastic tape and repurposing them as office loaners. I noticed that many of the umbrellas in the office stands were clear plastic or plain designs, figured the number of abandoned umbrellas would probably keep growing, and worried people might start grabbing the wrong ones by mistake. That reminded me of the time I wrote my name on masking tape and attached it to my umbrella with a rubber band as a makeshift name tag. lol That worked fine for me, but I thought it would be nice if everyone could have a name tag, so I prepared keychains: a keychain with your name on it to secure your umbrella.👍 This kaizen hasn't spread widely yet, but I hope it will be used more and more, not only for umbrellas but also as name tags for personal items stored in the refrigerator.

Where Does the Kaizen Mindset Come From?

Let me share the origins of my kaizen mindset. I've always enjoyed imagining things, ever since I was a child. On my way to school, I used to imagine things like, "Wouldn't it be cool if the road just moved on its own?✨" or, "What if a shield popped up automatically when it rained?✨" (Kind of like something out of Doraemon, right?😅) I think kaizen is just an extension of that kind of thinking. Great people follow that imagination into careers in research or engineering, but in my case, being at an average level, it's more about solving the problems right in front of me. When I find myself thinking, "If only this were easier…🤔", that's when kaizen starts. When it comes to work, my fundamental principle is that "making work easier" means "making work enjoyable." Who wouldn't be happier if their job got just a little bit easier? Eventually, those easier ways of working become the norm. The starting point is making things easier for myself, but I also consider the people who will use what I build as I go along. Whenever I'm doing something repetitive or routine, I find myself thinking, "Wouldn't it be nice if this were easier?" It may be difficult to fully realize an idea by myself, but ideally the things that have become easier will eventually become the norm, and whoever takes over from me will make them even better. I'd be thrilled if my improvements didn't stay frozen as a final version, but went beyond me and continued to evolve in someone else's hands. That's exciting to imagine, isn't it?

Next Kaizen - The Next Issue I Want to Tackle

Some recurring tasks are still handled in Excel, and I want to streamline them further, possibly with macros. So I've recently started experimenting with Sherpa ^2, which was just released internally, as well as ChatGPT. With a kaizen mindset at the core, I'll continue working to make things better!✨
Hello! I'm kasai from the SRE team. KINTO Technologies (KTC) will be a platinum sponsor of SRE NEXT 2025, held on Friday, July 11 and Saturday, July 12, 2025 at TOC Ariake! This is the first time KTC has sponsored SRE NEXT. Our SRE team made a fresh start last year. While we feel the difficulty of practicing SRE every day, we keep working to improve the reliability of our services, and I suspect many of you are going through the same trial and error. We volunteered to sponsor because we wanted to support a place where SRE practitioners gather!

What is SRE NEXT?

SRE NEXT is a conference for engineers with a deep interest in reliability practices. It is organized and run mainly by members of SRE Lounge, a community-based SRE study group. The theme of SRE NEXT 2025 is "Talk NEXT." While upholding the values of Diversity, Interactivity, and Empathy set out at SRE NEXT 2023, the event aims to be a place where participants gain new insights and discoveries through discussion and communication about the broad range of technical topics SRE covers, as well as organizations and talent development.

Home | SRE NEXT 2025

Event overview

- Dates: Friday, July 11 and Saturday, July 12, 2025
- Venue: TOC Ariake and online
- Official site: https://sre-next.dev/2025/

We have a sponsor session!

On Day 2 (7/12), from 13:00 to 13:20 in Track B, Osanai will give a sponsor session titled "What should SRE do in an organization with finely divided roles?" He will talk about how the SRE team has faced questions like "What should we be doing?" and "Where is our value as SREs?" amid overlapping roles in a finely divided organization. Details: https://sre-next.dev/2025/schedule/#slot081

We'll have a booth, too!

At the booth we have a short survey you can answer. Answering lets you spin the capsule-toy machine for a chance to win original merch, so please come visit! SRE team members will be at the booth on the day, so let's talk SRE!
I am Aritome from the Development Support Division at KINTO Technologies. I am in charge of organizing all-hands meetings and supporting engineer development and training programs. At KINTO Technologies (KTC), we support our engineers' growth through their work at the company. For this reason, we actively encourage participation in communities outside the company and speaking at external events. (President Kotera and Vice President Kageyama also frequently speak at externally hosted events.) On February 8, 2023, Wada-san, a young engineer from our data analytics team, joined a panel discussion as a guest speaker at the Digital Human Capital Development Seminar in Chubu, hosted by the Central Japan Economic Federation and the Digital Literacy Council. What did he talk about? What is his role at our company? I interviewed Wada-san after the seminar to find out.

To start with, could you introduce yourself?

Wada: Hello! My name is Wada and I work as a data scientist at KINTO Technologies. My main job is responding to analysis requests from both inside and outside the company, and developing AI functions for in-house apps. Thank you for having me today!

Aritome: Thank you! Can you tell us about your career path before joining KINTO Technologies?

Wada: I majored in social informatics at university. It's not a familiar term, but basically it's an applied field of informatics that focuses on using information and communication technologies to solve social issues. After graduating, I joined an automotive parts manufacturer in 2019, where I worked on production management systems. Then in 2022, I moved to my current role.

What was the theme of the event, and what led to you speaking there?

Wada: The Digital Human Capital Development Seminar in Chubu was aimed at management and mid-level employees of companies in the Chubu region, and stressed the importance of all employees acquiring digital literacy from now on. At the event, three specific qualifications that lead to digital literacy were recommended: the Information Technology Passport Examination, the Data Scientist Certificate, and JDLA Deep Learning for GENERAL (G-Certificate). In the latter half of the event, a panel discussion was held featuring Ryutaro Okada, Board Director and Secretary General of the Japan Deep Learning Association, along with four panelists who had gained digital literacy by obtaining certifications. The discussion covered what they found beneficial about earning the certifications, challenges they faced, and how the experience has influenced their work. I hold the JDLA Deep Learning for ENGINEER certification (commonly known as the E-Certificate). There was a call for panelists within the certification holders' community, and that's how I got the opportunity to take part.

Photo of the event venue

Aritome: I've been hearing a lot about the G-Certificate lately. Can you tell us more about it?

Wada: The G-Certificate is a qualification that tests basic knowledge of deep learning. The G stands for "Generalist," and the test covers not only the meaning of technical terms but also the history of the technology and legal regulations. It doesn't require much knowledge of math or coding, so it's also recommended for non-engineers! There's also a related qualification called the E-Certificate, which focuses more on deep learning theory and implementation skills.
If you hold either, you can join a community called CDLE (Community of Deep Learning Evangelists). That's where I found the call for panelists for this event. CDLE is a community exclusively for people who've passed either the G-Certificate or the E-Certificate, both run by the Japan Deep Learning Association (JDLA). It's a space for certified members to connect and share knowledge, and it operates entirely on a non-profit basis. *Quoted from the CDLE guidelines, CDLE community website.

Aritome: So there's a community of certified members. With that shared learning experience, the conversation's sure to be lively! What motivated you to get certified in the first place?

Wada: I thought obtaining a certification would be the most efficient way to acquire systematic knowledge! When I first started learning about AI, I was mostly referencing sample code I found online and diving into machine learning and deep learning without really understanding how anything worked. At first it was fun just to see things run, but gradually I became interested in the mechanics behind them. That's when I began reading more advanced books and technical blogs. However, learning this way gave me only bits and pieces of knowledge; it was tough to learn the field in a way that was both systematic and comprehensive. So I decided to take the certification exam, since its syllabus was packed with carefully curated content and suited to building systematic knowledge. To put it in an analogy, it's like filling a container with your favorite pebbles, each representing a bit of knowledge, but with gaps still left between them. The syllabus is like water that fills those gaps with structured learning! (Does that make sense?)

Image of knowledge acquisition

Aritome: I totally get that feeling of not knowing where to start when trying something new. When you're self-taught, it's hard to feel confident if your knowledge is all over the place. What challenges did you face, and how did you approach studying for the certification?

Wada: I had a certain level of understanding of how to use the technology from my self-study, but I had to relearn the background, the underlying technology, the history leading up to it, and the legal frameworks. In addition, at that time the E-Certificate exam didn't use any specific frameworks, and the questions were based on scratch implementations using NumPy. Since I had been working with scikit-learn and Keras, getting used to the unfamiliar syntax was definitely a challenge. But I wanted to fill in the gaps in my knowledge, so it was a perfect match for my original goal and worth the effort (laughs).

Aritome: Because it's a certification, I imagine you really have to study the full scope of the field, even areas you're not as comfortable with. It sounds like a challenge! Did getting the certification or studying a new field lead to any changes for you?

Wada: Learning all the key terminology around AI gave me the confidence to start tackling more advanced books, including academic papers I wouldn't have dared to touch before. I can't say I breeze through them, but: "Ohhh! I can read! I'm reading!" (laughs)

Aritome: That sense of growth must make all the effort feel worthwhile! What were some of the best things about being certified?

Wada: Nowadays, AI is being integrated into many different areas and creating significant value. I think the ability to look at different areas and ask, "What if I combined AI with this?" will become one of my personal strengths.
With tools like ChatGPT lowering the barrier to entry, I believe we'll see even more accessible AI services emerging, and this trend will only continue to grow.

At KINTO Technologies, are there any systems or cultural elements in place to support learning?

Wada: There's a strong culture of sharing what we learn. We have study sessions across different scopes: within teams, across departments, and company-wide. Even small acts of information sharing are encouraged; our tech news Slack channel is constantly buzzing with interesting updates. You can also easily request the purchase of books that are useful for work, and you can access a variety of books on the online bookshelf shared between offices. If the opportunity comes up, as in my case, you're free to speak at external events, too!

What kind of employees are there at KINTO Technologies?

Wada: My first impression after joining was, "There are all kinds of people here!" (lol) At my previous job almost everyone was a new graduate, so coming into a company where everyone is mid-career was a big change. Everyone brings their own specialty from past experience, and it's really inspiring to see those strengths complement each other to get things done! I am expected to work as a specialist in the AI field, which makes it a really rewarding environment where I can keep growing.

Is there anything you personally do to promote a learning culture?

Wada: I try to be open about my own skills, what I've been learning, and what I'm interested in. It leads to people saying things like "I found this article" or asking "Can you explain this?" While I'm explaining, I often learn something new, too. It creates a great feedback loop.

Lastly, do you have a message for our readers?

Wada: I wasn't able to talk much about the technical side this time, but I'd like to write more about the AI products I work on in the future! Thank you for reading all the way to the end!

We Are Hiring!

We are looking for people to create the future of mobility with us. If you are interested, please feel free to contact us for a casual interview.
Introduction

Hello, I am Nishida, a member of the payment platform development team at KINTO Technologies. In this article, I'd like to share how we used AWS SAM to build the backend for an internal payment operations system, which was introduced earlier in this article.

What is AWS SAM?

AWS SAM (Serverless Application Model) is a tool that makes it easy to build and deploy serverless services such as Lambda and API Gateway. With AWS SAM, developers no longer need in-depth knowledge of infrastructure and can focus on building applications on a serverless architecture.

Why We Chose AWS SAM

Right after joining KINTO Technologies, I became involved in developing a payment operations system. Given the short development timeline of just 2 to 3 months, we needed backend technologies that supported rapid iteration. Since it was an internal system with limited traffic, we decided to go with AWS SAM, leveraging my prior experience with it from a previous role.

How to Use AWS SAM

I'd like to use AWS SAM to build a REST API with API Gateway and Lambda in a serverless setup. Here's what the directory structure looks like:

```
.
├── hello_world
│   ├── __init__.py
│   └── app.py
└── template.yaml
```

First, install AWS SAM by following the official documentation. AWS SAM uses a file called a template to manage AWS resources:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  sam-app
  Sample SAM Template for sam-app
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: HelloWorldFunction
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.9
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```

The Lambda handler (hello_world/app.py) looks like this:

```python
import json

def lambda_handler(event, context):
    body = {
        "message": "hello world",
    }
    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }
    return response
```

We deploy using the sam command. This time, I'll deploy interactively using the --guided option:

```
sam deploy --guided
```

Enter the stack name, region, and so on:

```
Stack Name [sam-app]: # Enter the name of the stack to deploy
AWS Region [ap-northeast-1]: # Enter the region to deploy to
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: # Choose whether to review changes before deploying
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: # Choose whether the SAM CLI may create IAM roles
#Preserves the state of previously provisioned resources when an operation fails
Disable rollback [y/N]: # Choose whether to disable rollback
HelloWorldFunction may not have authorization defined, Is this okay? [y/N]: # Choose whether to configure authorization for the Lambda
Save arguments to configuration file [Y/n]: # Choose whether to save these settings
SAM configuration file [samconfig.toml]: # Enter the configuration file name
SAM configuration environment [default]: # Enter the environment name
```

Once the deployment is complete, check the Lambda console to confirm that HelloWorldFunction has been created. You can also find the endpoint by selecting the API Gateway that triggers the Lambda. Let's try sending a request using curl:

```
curl https://xxxxxxxxxx.execute-api.ap-northeast-1.amazonaws.com/Prod/hello
```

If the request is successful, you'll get a response like this:

```
{"message": "hello world"}
```

After Trying It Out

As I had prior experience with AWS SAM, I was able to get the basic infrastructure up and running in just a day, which helped us stay on track with the development schedule. Once you're familiar with it, one of the best things about AWS SAM is how easy it makes building APIs in a serverless setup.
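APIs aren't the only thing SAM keeps compact. As a minimal, hedged sketch of a queue-driven worker (the message schema and the "payment_id" field are illustrative assumptions, not our actual system), a Lambda handler for an SQS event source simply iterates over event['Records']:

```python
import json

def lambda_handler(event, context):
    """Handle a batch of messages delivered by Lambda's SQS event source.

    The JSON message body with a "payment_id" field is a hypothetical
    schema used only for this sketch.
    """
    for record in event["Records"]:
        message = json.loads(record["body"])
        # One unit of batch work per message would go here
        print(f"Processing payment {message.get('payment_id')}")
    # Raising an exception instead would make SQS redeliver the batch
```

Wiring this up in the template should be a matter of declaring an SQS-type event (Type: SQS with the queue's ARN) in place of the Api event shown earlier.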
In addition to API Gateway and Lambda, we also use AWS SAM to build EventBridge and SQS resources, which we use for periodic processing such as batch jobs. The official documentation has also improved a lot, which I think has lowered the barrier to getting started.

Conclusion

In this article, I shared how we quickly built the backend for a payment operations system from scratch using AWS SAM. Since it's a tool provided by AWS, it integrates well with other AWS services, reduces the overhead of environment setup, and lets you focus on actual development. If you're interested, I highly recommend giving it a try.
Introduction

Hello. My name is Shimamura. I used to be a DevOps engineer in the Platform Group, and I'm now on the Operation Tool Manager team within the same group, where I'm responsible for platform engineering and tool-related development and operations.

KINTO Technologies' Platform Group promotes IaC using Terraform. We define design patterns that are frequently used within the company and provide them as reference architectures, and each environment is built based on those patterns. For the sake of control, every environment from development to production is built upon ticket-based requests. Before building the development environment, we prepare a sandbox environment (an AWS account) for the application department's verification. However, sandboxes are often built manually and differ in many ways from the environments built by the Platform Group. If a design-pattern environment could be built automatically upon developer request, it would eliminate the waiting time between the request and the creation of the environment and improve development efficiency. I think this kind of request-based automated building is a common requirement in DevOps, but Kubernetes still seems to be the most common application execution platform in such examples. KINTO Technologies uses Amazon ECS + Fargate as its application execution platform, so I would like to introduce this as a (probably) rare example of automated environment building for ECS.

Background

Challenges:

- The environment is not there when application developers need it (during verification and launch). As part of our DevOps activities, I researched auto-provisioning (automated environment building) and found that it is common elsewhere but absent within our company.
- There is a large gap between an environment built in a sandbox with a relatively high degree of freedom and an environment built according to the Platform Group's design patterns: IAM permissions and security, and the presence of common components such as VPCs, subnets, and NAT gateways. As a result, communication costs become high for both parties when a build is requested.

Solution

Why not create an automated building mechanism? Since it follows a design pattern, some AWS services may be missing, but that is tolerable; presumably they will be added manually. As a first step, it is worthwhile to automatically build an environment on AWS in about an hour so that developers can check the operation of their application and prepare for CI/CD.

Let's Make It

Thankfully, our Terraform is becoming more modular, so we can build environments in a variety of patterns simply by writing a single file (locals.tf). I set the following as a baseline:

- Use in-house modules (must)
- Build with in-house design patterns as a base (must)
- Ensure DNS is configured automatically and communication is possible via HTTPS
- Be able to generate locals.tf automatically

I prototyped an application to see whether locals.tf could be structured and generated using Golang's hclwrite. After prototyping, I found the structuring too difficult, so I eventually gave up on automatic generation and handled it by replacing some parameters in a template file instead (see the sketch below). Since the process is simple replacement, detailed settings for each component are not possible.
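To make the replacement step concrete, here is a minimal sketch of the idea using Python's standard library. The template file name and placeholder keys are assumptions for illustration; the actual pipeline does this with shell scripts running on CodeBuild.

```python
from pathlib import Path
from string import Template

# Hypothetical template (locals.tf.tpl) with $-style placeholders, e.g.:
#   locals {
#     product_name = "$product_name"
#     vpc_cidr     = "$vpc_cidr"
#   }
template = Template(Path("locals.tf.tpl").read_text())

# Values that would come from the CMDB request form
params = {
    "product_name": "sample-app",
    "vpc_cidr": "10.0.0.0/16",
}

# safe_substitute leaves unknown placeholders untouched instead of raising
Path("locals.tf").write_text(template.safe_substitute(params))
```

A sed one-liner per parameter achieves the same thing; the trade-off in both cases is that only whole values can be swapped in, which is exactly why per-component fine-tuning is out of scope.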
The Final Result

From the GUI on the CMDB, you select a product and a design pattern. When you click Create New, the specified configuration is built in the sandbox environment of the department associated with the product in 10 to 40 minutes (depending on the configuration).

Overall Configuration

I separated the part that creates the Terraform code from the part that actually builds it in the sandbox environment, so that they can be tested separately.

Terraform code generation part:

- ProvisioningSourceRepo: issue management, GitHub Actions execution, Terraform code for each created sandbox environment, CIDR list for each sandbox environment
- ProvisioningAppRepo: templates for the design patterns, the CodeBuild YAML (buildspec.yml), various shell scripts run on CodeBuild
- InfraRepo: Terraform modules

AWS environment building part:

- S3: source and artifact storage for CodePipeline
- EventBridge: CodePipeline trigger
- CodePipeline/CodeBuild: the actual build environment
- Route53 (Dev): authority is delegated from the production DNS, and Route53 is used in the Dev environment

Terratest (Apply)

The Terratest sample looks like this. The test is nested so that if any of the Init, Plan, or Apply steps fails, the test ends, and if the Apply step fails midway, what was applied up to that point is destroyed. With more Golang knowledge, you could probably write this more neatly.

```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
)

func TestTerraformInitPlanApply(t *testing.T) {
	t.Parallel()
	awsRegion := "ap-northeast-1"
	terraformOptions := &terraform.Options{
		// data.uuid comes from the surrounding test harness (omitted here)
		TerraformDir: "path/to/terraform/files/" + data.uuid,
		EnvVars: map[string]string{
			"AWS_DEFAULT_REGION": awsRegion,
		},
	}
	// If Init has no error, run Plan; if Plan has no error, run Apply.
	// Nested ifs handle this (run flat, every step would still execute
	// as a test even when Init fails).
	if _, err := terraform.InitE(t, terraformOptions); err != nil {
		t.Error("Terraform Init Error.")
	} else {
		if _, err := terraform.PlanE(t, terraformOptions); err != nil {
			t.Error("Terraform Plan Error.")
		} else {
			if _, err := terraform.ApplyE(t, terraformOptions); err != nil {
				t.Error("Terraform Apply Error.")
				terraform.Destroy(t, terraformOptions)
			}
			// else: success
		}
	}
}
```

Elements

- CMDB (developed in-house): a configuration management database for managing configuration information. Since rich features were unnecessary, KINTO Technologies developed its CMDB in-house. On top of it, we built the request form for automatic building; after a build, the FQDN and other information are automatically registered back into the CMDB.
- Terraform: a product for writing various services, AWS among them, as code (IaC). Our in-house design patterns and modules are written in Terraform.
- GitHub: a version control system for storing source code. Build requests are logged by raising an issue. Since the Terraform code is also needed for deletion and similar operations, we save the code for each sandbox environment.
- GitHub Actions: the CI/CD tool included in GitHub. At KINTO Technologies, we use GitHub Actions for tasks such as building and releasing applications. In this case, issue filing is the trigger for deciding Create/Delete, selecting the necessary code, compressing it, and connecting to AWS.
- CodePipeline/CodeBuild: CI/CD tools provided by AWS, used here to run the Terraform code. We could run Terraform/Terratest on GitHub Actions, but since GitHub Actions is used daily for application builds, we chose these to avoid affecting the product teams through usage limits and the like.
- Terratest: a Go library for testing infrastructure code. You can also test modules with it, but in this case we use it to recover from failures in the middle of a Terraform Apply. See the official site for details.

Restrictions

- We target the multiple sandbox environments (AWS accounts) associated with each development team, but only one environment can be created at a time (exclusive), because CodePipeline/CodeBuild run in the same environment for DNS reasons.
- We also create parts that the application does not use. It may seem wasteful, but this follows from building the design pattern as a seamless line from FQDN to DB.
- You need to set up the VPC and other module prerequisites beforehand; that is, the set of common components such as the VPC must already be built.

What to Do if There Are No Modules

KINTO Technologies has been working on design patterns for some time, so we have the advantage of being able to easily use Terraform to build everything from CloudFront to RDS. What can you do if you haven't progressed that far but still want to implement auto-provisioning on ECS? Here's what I thought: create everything up to the ECS cluster in advance, then prepare a Terraform file covering the following and build from it:

- ECS Service
- ECR Repository
- ALB TargetGroup
- ALB ListenerRule
- IAM Role
- Route53

The task definition can be created by anyone with the right permissions, so that part is up to the user.

Configuration Proposal

I think CodePipeline/CodeBuild would work instead of GitHub Actions, but considering the need to prepare a GUI the way CodeCommit does, wouldn't it be easier to put it all together on GitHub? So here is the proposed configuration. I haven't used AWS Proton yet, so I haven't considered it. I think the parameter parts such as locals.tf could be separated out and generated using the sed command or Golang's HCL library. Once you have confirmed the build using Terratest or similar, add the desired FQDN as an ALB alias and match it with the ListenerRule.

Next Steps

Originally we had hoped to offer this early and get feedback, but at present it hasn't been used much. We have provided a GUI for it, and we plan to start by having a variety of people use it and give feedback. There is still plenty to do, such as increasing the number of supported design patterns and simplifying the associated CI/CD settings. I would really like to introduce Kubernetes and then move on to auto-provisioning, which has many applications. |・ω・`) Is that not possible?

Impressions

To be honest, I tried hard to automatically generate templates using Golang, but gave up because the HCL structure of our in-house design patterns was difficult to analyze and reconstruct. There was some talk internally about this being a reinvention of the console, but if we could get that far, I think we might be able to automate not only the sandbox environment but also the STG environment. For the Platform Group, an environment can be created simply by tapping and selecting a few items in the GUI. It's really simple. To be honest, I wanted to reach that level, but I think it was good that I was able to take even this first step. In Kubernetes, something similar could probably be built by preparing a Helm chart as a template. I would like to consider alternative methods and keep trying things.

Summary

The Operation Tool Manager team oversees and develops tools used internally throughout the organization.
As I wrote in my previous O11y article, we organize these mechanisms and present them to application developers so that they can use them on a self-service basis, supporting the creation of value by those developers. A platform engineering meetup was held a little while ago, and it's reassuring to know that this is in line with the direction we're moving in. The Operation Tool Manager team also builds in-house tools that let developers quickly and intensively create value for their applications. Please feel free to contact us if you are interested in any of these activities or would like to hear more.
Introduction

Hello! I'm high-g (@high_g_engineer) from the New Car Subscription Development Group, based at Osaka Tech Lab. In this article, I'd like to introduce the in-house frontend study group we launched.

How It Started

One day, during a 1on1 with my manager in the New Car Subscription Development Group, I happened to show him the TSKaigi 2024 timetable. Looking at it, he made a positive suggestion: "It covers a wide range of themes and is excellent learning material. It would be great to have a study group where frontend engineers from different departments use it to share knowledge and build horizontal connections." That comment prompted me to recruit frontend engineers who seemed eager to participate, and the planning of the in-house study group began.

Goals of the Study Group

The study group has three goals:

- Learning: share talks from external conferences and web standards, and deepen understanding through discussion
- Practice: apply what we learn to actual products
- Sharing: share the insights and challenges gained from practice among participants and accumulate them as organizational knowledge

It is a practical study group that aims not merely to acquire knowledge but to reach the state of "actually being able to use it at work." We reserve one hour once a week and run sessions in whatever format suits the content: read-throughs, mob programming, hands-on exercises, and so on.

Main Themes and How the Group Evolved

We have held 34 sessions from September 30, 2024 to the present, covering the following.

Sharing insights from conferences and tech events (sessions 1-17)

To get the study group established, we first studied the latest frontend trends, focusing on talks from TSKaigi 2024 and JSConf JP 2024:

- Thinking about the future of Prettier (session 1): where code formatters are heading
- Improving TypeScript performance (session 2): compared against issues in our actual production code
- What happened when we unified everything in TypeScript! (session 3): a full-stack development case study
- The TypeScript journey: Helpfeel's trial and error and path to success (session 5)
- Type-safe and efficient routing with TanStack Router (session 7)
- Storybook-driven development: reproducibility and efficiency in UI development (session 9)
- Setting a "trust boundary" instead of over-trusting TypeScript type definitions (session 10)
- mizchi's "LAPRAS live performance tuning" (sessions 12-13): learning performance improvement from an external case
- Micro-frontends on the Yahoo! JAPAN top page (session 15): development in a large organization
- Interoperability of JavaScript module resolution (session 16)
- You Don't Know Figma Yet - hacking Figma with JS (session 17)

Cross-team knowledge sharing and mutual understanding (sessions 18-24)

As the number of participants grew, we shared individual skills and the situation of each department's frontend team. This let us consider the future direction of the study group, review each team's projects at the code level, and share frontend challenges across KINTO Technologies as a whole:

- Retrospective so far (session 18): discussing the direction of the study group
- Self-introductions, with career histories (session 19): promoting mutual understanding among members
- Frontend development status sharing for each team (sessions 20-24): a five-part series sharing each team's tech stack, challenges, and initiatives in detail

Understanding and practicing web standards (sessions 25-28)

Because a retrospective surfaced the desire to understand web specifications, we dug into Baseline together: https://web.dev/baseline?hl=ja

- Understanding Baseline (sessions 25-27): a three-part series for learning web standards systematically
- Baseline retrospective and discussion of what to do next (session 28): organizing what we learned and considering future directions

Hands-on performance improvement (session 29 to the present)

Referring to mizchi's public performance tuning video, we practiced performance tuning on actual products: https://www.youtube.com/watch?v=j0MtGpJX81E

- FACTORY performance improvement (sessions 29-30): a two-part series sharing concrete improvements and results
- TSKaigi 2025 talk-sharing session (session 31): previewing talks by our own members
- KINTO ONE performance improvement (sessions 32-34): a three-part series where everyone worked on actual improvements in a mob programming format, the most practical learning so far

Results of Continuous Sessions

Growth and retention of participants

The study group started with five members and has grown, through continuous sessions, into one where more than ten people participate regularly. Participants now come not only from the New Car Subscription Development Group but also from other groups, realizing the horizontal connections we aimed for. Most of the initial members still participate, and new members also show high retention, which suggests the study group is valuable time for participants.

Gradual improvement of organizational technical skills

Knowledge gained at conferences and tech events usually stays within the scope of individual learning, but by learning together as frontend engineers from different departments with different challenges, the whole organization benefits, and we gain perspectives that are hard to notice individually. Members who continuously participated in the Baseline series, studying web standards systematically, can now understand at the specification level technologies they had previously used "by feel," and can make better-grounded technical choices at work.

Acquiring practical problem-solving skills

In recent sessions, we worked on performance improvement for actual products such as KINTO ONE and KINTO FACTORY in a mob programming format. By applying the study group's insights directly to products and verifying them hands-on, we overcame our aversion to performance tuning. It became a practical, valuable initiative that also contributes to sales.

Deepening technical collaboration between teams

Through the development status sharing sessions, we had more opportunities to learn about other teams' technical initiatives and challenges that we hadn't fully grasped before. As a result, teams with similar problems increasingly consult each other individually after the study group. The study group has begun to function not just as a place to learn, but as a hub for solving actual work problems.

Summary

This study group, which began with the goal of strengthening horizontal connections among in-house frontend engineers, started with sharing insights from external conferences and has evolved into performance improvement on real products. Through continued sessions, we will keep nurturing connections and aim to be a place that contributes to raising the technical level of the entire organization.
Introduction

Hello! I'm Tanachu from the Security & Privacy Group at KINTO Technologies! I usually work on log monitoring and analysis using SIEM, building monitoring systems, and handling cloud security tasks as part of projects in the SCoE group (you can read about what the SCoE group is here). Here is my self-introduction. In this article, I share a report on our visit to the Sysdig Kraken Hunter Workshop, held on March 26, 2025, at the Collaboration Style event space near Nagoya Station.

The Event Space

Using Sysdig Secure at KINTO Technologies

At KINTO Technologies, we mainly use Sysdig Secure for Cloud Security Posture Management (CSPM) and Cloud Detection and Response (CDR). I've covered the details in this blog post, so feel free to take a look: A Day in the Life of a KTC Cloud Security Engineer

What is the Sysdig Kraken Hunter Workshop?

Sysdig is a company founded by Loris Degioanni, co-creator of the well-known network capture tool Wireshark. It offers security solutions for cloud and container environments, built around Falco, an open-source standard for cloud-native threat detection developed by Sysdig. We use Sysdig Secure to monitor cloud activities such as permission settings and account or resource creation in our cloud environments. The Sysdig Kraken Hunter Workshop is a hands-on session where you run simulated attacks on a demo Amazon EKS environment and work through a series of modules using Sysdig to practice detection, investigation, and response. If you pass the post-workshop exam, you earn a Kraken Hunter certification badge. In this blog, I'll walk you through the three modules that stood out the most.

Module 1: Simulated Attack and Event Investigation

In this module, we carried out a simulated attack on a demo Amazon Elastic Kubernetes Service (Amazon EKS) environment and used Sysdig Secure to detect and investigate the event. First, following the provided documentation, we simulated a remote code execution (RCE) attack on the Amazon EKS demo environment. The simulated actions included:

- Reading, writing, and executing arbitrary files on the system
- Downloading files onto the system

After running the simulated attack, we accessed the Sysdig Secure console in a browser. By checking the status of the targeted resources, we could confirm that Sysdig had detected events related to the attack. (Reference: sysdig-aws workshop-instructions-JP) Digging deeper, we confirmed that Sysdig Secure had picked up the simulated attack in real time. (Reference: sysdig-aws workshop-instructions-JP) This hands-on flow let us try out a simulated attack and see exactly how Sysdig Secure handles detection and investigation through its console. By running the attack myself and going through the investigation process with Sysdig Secure, I got a solid sense of what the tool is capable of.

Module 2: Host and Container Vulnerability Management

In this module, we explored Sysdig Secure's features for managing vulnerabilities in both hosts and containers. Since our own products use containers and follow a microservices architecture, this topic is especially relevant to us. Sysdig Secure offers several types of vulnerability scans: Runtime Vulnerability Scanning, Pipeline Vulnerability Scanning, and Registry Vulnerability Scanning. The Runtime Vulnerability Scan lists all containers that have run in your monitored environment in the past 15 minutes, along with all hosts/nodes that have the Sysdig Secure Agent installed.
Resources are automatically sorted by severity based on the number and risk level of vulnerabilities, making it easy to spot what needs attention first. (Reference: sysdig-aws workshop-instructions-JP) You can also click any listed item to drill down into vulnerability details. (Reference: sysdig-aws workshop-instructions-JP) The Pipeline Vulnerability Scan checks container images for vulnerabilities before they are pushed to a registry or deployed to a runtime environment, while the Registry Vulnerability Scan targets images already stored in your container registry. This way, you can check for vulnerabilities at each phase of the container image lifecycle, from development to production. There are plenty of security tools out there for vulnerability management, but the Sysdig Secure console stood out to me for its sophisticated UI and intuitive usability.

Module 3: Container Posture & Compliance Management

In this module, we experienced how Sysdig Secure helps manage posture and compliance in cloud environments. As you have probably seen or heard in the news, misconfigurations in the cloud are a major cause of security incidents. Since we build our products in a fully cloud-native setup, this isn't somebody else's problem; it's something we take seriously, which is why this feature caught our attention. As a posture and compliance management feature, Sysdig Secure lets you check whether your environment complies with common standards like CIS, NIST, SOC 2, PCI DSS, and ISO 27001. (Reference: sysdig-aws workshop-instructions-JP) It also highlights non-compliant resources and shows you how to fix them. While it's hard to say whether the suggested steps will be practical in every situation, having that guidance readily available saves the work of researching fixes. As an admin, that's a huge plus. (Reference: sysdig-aws workshop-instructions-JP)

Kraken Hunter Certification Exam

The Kraken Hunter certification exam had about 30 to 40 questions on a dedicated web page. The questions covered topics from the workshop, so if you paid attention, you had a solid shot at passing. I struggled a bit with some of the finer details introduced at the start of the workshop, but I managed to pass! Here's the certification badge awarded to those who pass:

Kraken Hunter Certification Badge

Using Sysdig Secure Going Forward

We're exploring and pushing the following ways to get the most out of Sysdig Secure:

- CSPM: creating custom policy rules in Rego based on our governance framework to ensure cloud security that aligns with our internal policies
- CDR: building custom rules using Falco to expand threat detection tailored to our environment
- CWP: testing and implementing Cloud Workload Protection (CWP) to secure our container workloads

Summary

In the Sysdig Kraken Hunter workshop, we conducted a simulated attack against an Amazon EKS demo environment and got hands-on with Sysdig Secure: detection, investigation, response, and more. Since we had used only a limited set of Sysdig Secure's features at our company, most of what was introduced was new to us. While we fumbled a bit at first, it was a great chance to see what the tool is truly capable of. Joining the in-person workshop also gave us the chance to hear real stories from other companies, their challenges and efforts in the field. Big thanks to the organizers for making this happen.

Conclusion

Our Security & Privacy Group, along with the SCoE group that joined this workshop, is looking for new teammates.
We welcome not only those with hands-on experience in cloud security but also those who may not have experience yet but have a keen interest in the field. Please feel free to contact us. For more information, please check here.
Introduction

Hello, I'm hiro, and I joined in April 2025! In this article, I asked my fellow April 2025 joiners about their impressions right after joining and compiled the results. I hope it makes useful content for anyone interested in KINTO Technologies (KTC), and a good retrospective for the members who took part!

Minami

Self-introduction: I will belong to the Data Strategy Division, newly launched in July.

What is your team's structure? It's a team that handles everything end to end, from analysis to proposing business strategies and tactics to executing measures. I get to work on business growth alongside very talented analysts, data scientists, and engineers, so I'm already looking forward to it.

First impression of KTC? Any gaps from expectations? I was surprised by the degree of freedom in how we work and how we spend time at the office. Some people carry a quiet passion beneath a calm surface, so I'm excited for what's ahead.

What's the atmosphere like on the ground? The team is relatively young, friendly, and cheerful. I'll do my best so the team can show its strengths under the new structure.

How did you feel about writing this blog? I read these posts before joining, so I hope this one helps people considering KTC.

MAo's question for Minami: Any recommended travel destinations, in Japan or abroad? Kauai in Hawaii, where I went last year, was wonderful!

H.N

Self-introduction: I'm mainly in charge of the dealership business domain in the Business System Development Division. Lately my hobby is hunting for good restaurants around the Muromachi office.

What is your team's structure? I'm on the Nimbus team within the Business System Development Division; three permanent employees and our partner staff handle the day-to-day work.

First impression of KTC? Any gaps? There are more opportunities to join internal events and dinners than I imagined, which creates chances to connect with other departments, so it was a gap in a good way!

What's the atmosphere like? Even when busy, many senior colleagues create an atmosphere where it's easy to ask about problems or unclear points, which helps me a lot as I catch up day by day!

How did you feel about writing this blog? I'm not used to writing articles like this, tech blogs included, but I hope this becomes a chance to learn to enjoy it!

Minami's question for H.N: Which internal events have you found interesting? I haven't been able to attend one yet, but I'd like to join these: KTCBeerBash, internal study sessions on generative AI, and any event that deepens exchange across departments and teams!

K.S

Self-introduction: I'm mainly in charge of UI/UX improvements for the my route app. My hobby is camping with my family.

What is your team's structure? A new structure starts in July. The PDM and the development team will work as one to make the app better.

First impression of KTC? Any gaps? The Jimbocho office had just been renovated when I joined, and it is simply beautiful. On top of that, the chairs and desks in the break space are all from outdoor brands. Very stylish!

What's the atmosphere like? Everyone on the my route team is kind and supports me with any problem. The welcome party was wonderful too!

How did you feel about writing this blog? I started reading past introduction blogs by senior colleagues, and it was a good chance to learn more about the company.

H.N's question for K.S: Any recommended campsites or destinations you've visited with your family? For being close to Tokyo, kid- and pet-friendly, and well equipped: TACO GLAMP, Moroyama Yuzu no Sato Auto Campground, and Mikabo Kogen Auto Campground! There are plenty of others. But honestly, as long as I can have a campfire, anywhere is fine lol

Chiru

Self-introduction: Nice to meet you! I'm Chiru. I belong to the Corporate IT Group in the IT/IS Division. My career so far has been as a web development engineer, but at KTC I'm working hard every day to apply my engineering skills to corporate IT: internal information systems, IT operations support for dealerships, and other work that strengthens the organization!

What is your team's structure? I'm on the Innovation Drive team! We're a team of nine, myself included, and each member plays to their own strengths. Our goals are to "make KTC's IT environment the best it can be" and to "deliver the value created at KTC to the outside world"; we aim to be a group of engineers who maximize KTC's value and turn it into external value, not limiting ourselves to internal work.

First impression of KTC? Any gaps? My first impression was: so many energetic people! There was an internal event right after I joined, and there are regular Beer Bashes, so I felt it's a company with a lot of interaction. As for gaps, what I heard in the casual interview and the hiring interviews matched reality, so there were no big ones. Being a group company of a large corporation, I had assumed rigid workflows and strict constraints, but there was nothing like that; if anything, I was surprised by the speed!

What's the atmosphere like? Team members are based at different sites, so we mostly get together online, yet strangely there's no sense of distance. We can communicate instantly on Slack or Zoom when we have questions or need advice, so the atmosphere is great. When an incident occurs, everyone gathers around asking "What happened? What's going on?", and in team meetings discussions get so lively we run out of time. It's a very active team, and I like it!

How did you feel about writing this blog? I had been reading the tech blog before joining and knew I'd have to write one, so my reaction was: so the time has come... lol

K.S's question for Chiru: Any good, nearby set-meal restaurants around the Nagoya office? Tempura to Wine Kojima, inside the Yanagibashi Central Market near the office. The freshly fried tempura set meal is highly recommended. If you come to Nagoya, let's go together!

MAo

Self-introduction: I belong to the Corporate IT Group in the IT/IS Division. I'm mainly in charge of "visualization," building BI dashboards and improving the workflows around them, working close to the front lines.

What is your team's structure? I'm on the Innovation Drive team, about ten people.

First impression of KTC? Any gaps? Everyone around me talks to me so much!!

What's the atmosphere like? Everyone brings their own ideas and thinks through better solutions together.

How did you feel about writing this blog? I was nervous about what to write.

Chiru's question for MAo: What's your recent obsession? Drinking tea while watching the cars go by on the road! It makes me think, "Everyone's out there moving. I'll do my best too!"

Closing

Thank you all for sharing your impressions after joining! New members are joining KINTO Technologies every day! More joining-entry posts from people in various departments are on the way, so please look forward to them. And KINTO Technologies is still recruiting people to work with us in a wide range of departments and roles! For details, please check here!
Introduction

Hello! My name is K, and I work as a designer at KINTO Technologies. I usually work mainly on UI/UX design for e-commerce sites, but sometimes I also get the opportunity to work on communication design. Back in November 2024, I was responsible for designing the logo that became the face of our internal event, the CHO All-Hands Meeting! Since this was a special opportunity, I'd like to casually introduce the behind-the-scenes aspects of the production and the points I paid particular attention to.

What is a Logo?

A logo is more than just decoration; it serves as the face of a brand or event. At a glance, people can recognize it and think, "Oh, that's the event!" or "Hey, I've seen this before!" It plays a big role in shaping the impression the event leaves on them. Instead of just creating something because it seems cool, I ask myself, "What kind of vibe will this event have?" "What message is it trying to send?" "What impression do I want people to walk away with?" It was a good reminder of how important it is to design with intention.

Defining the Concept: Understanding the Core of the Event

First, I had a chat with the art director in charge of the event's overall artwork. As we clarified the event's purpose and key message, I started shaping the concept behind the logo. The concept of the CHO All-Hands Meeting event was "initiative" and "connection."

- A space where everyone proactively connects with their colleagues
- A place where we truly feel that our company encourages taking initiative
- An energetic, lively atmosphere with colleagues that fuels our motivation for the next challenge

I wanted to capture all of that through a design that feels "free and fun" and has "an energetic vibe that brings people together."

Exploring Design Directions

The next phase was to find out what kind of visuals would fit the concept. How do I express "a design that feels free and fun" and "an energetic vibe that brings people together"? When I thought about this, one theme came to mind: "otaku culture x technology." The event draws lots of people from development and creative teams. For many of them, things like anime, games, mecha, and manga are not only familiar but genuinely exciting. By blending that with a futuristic tech vibe, I felt we could create a world that feels even more open, energetic, and full of positive momentum.

Some visual elements we considered were:

- Elements of mecha, robots, and tokusatsu-inspired details: to bring in that mechanical, industrial edge.
- Manga and comic-style elements: to experiment with bold, energetic lettering and speech bubble shapes.
- Digital-style typography: to add a subtle futuristic vibe.

I started by sketching out rough ideas by hand, letting the concepts flow freely from there.

From Sketch to Digital

Once the direction became clear from the sketches, I moved into Illustrator to start digitizing the design.

- Cleaned up the rough drafts and created the base shape.
- From there, made several variations, each with subtle tweaks in nuance.
- Discussed with the art director which design best embodies the concept.

Of course, plenty of ideas didn't make the cut, but going through that trial-and-error process really reminded me how essential it is for creating great design.

Polishing the Details

Once the rough draft was locked in, it was time to move into the final phase. At this stage, I kept refining things, tweaking the details until everything felt just right.
One of the key elements that really shapes the impression of a logo is the font. For example, rounded fonts can give off a soft, friendly vibe, while sharper fonts feel more sleek and polished. Even small differences like that can completely change the overall tone of the logo. This time, instead of relying on existing fonts, I created an original font. While keeping the event's core themes of autonomy and interaction in mind, I made adjustments with a focus on the following points:

- Improve readability: adjust letter width, proportions, and spacing to make the text easier to read and give the overall design a cohesive feel.
- Refine curves: reduce the number of paths to create smoother, more polished shapes.
- Harmony between kanji and katakana: be mindful of consistent shapes so the characters feel balanced when placed together.

When compared to the original red guidelines, the final shape has changed significantly. This process really reminded me how even the smallest design choices, like font style and tiny shape tweaks, can greatly affect the impression a logo gives.

Finalizing the Logo: Balancing Playfulness and Versatility

And finally, the logo was complete! It strikes a nice balance between playful vibes and practical versatility, making it easy to use in all kinds of contexts. The mix of subculture and tech came through naturally in the design. By being meticulous about the shapes and fonts, the overall finish and quality really leveled up!

Summary

Logo design is something where even the tiniest details can totally change the impression it gives. This project reminded me how important it is to keep pushing until you hit that "This is it!" moment. This time, I think I managed to go beyond just making "something that looks cool" and created a design that really captures the spirit of the event and works across different contexts. If this article sparks even a little inspiration or insight for someone out there, I'll be happy!
My name is Ryomm, and I work at KINTO Technologies on my route (iOS) development. Here are some things I've done to save CI credits.

Introduction

In our project, we use Bitrise as a CI tool. Last year, in addition to regular unit testing, we introduced snapshot testing and moved to SPM. Before we knew it, the time for each Bitrise CI run had ballooned to around 25 minutes, and in months with a lot of implementation work, we often ended up exceeding our budget. Bitrise becomes expensive if you exceed the contracted amount; at the exchange rate at the time of writing, each excess CI run costs about 400 yen. That's expensive! As a result, whenever credits were about to run out, a habit formed of merging only the bare minimum of PRs to avoid triggering CI. To overcome this situation, we worked on some credit-saving techniques for our project.

Reviewing the CLI Tool Setup

By looking at the Bitrise build results, you can see how long each step took.

Bitrise Build Result

Looking at this, we can see that the "Script Runner" step took 12 minutes. This is the step where we set up SwiftLint and LicensePlist. As I mentioned in my previous article, the libraries are downloaded into a package created separately from the workspace so that they can be executed in the Build Phase. That certainly takes time, so let's shorten it. Fortunately, the libraries we use here are compatible with the Build Tool Plugin, so we can skip this step entirely by migrating to the plugins. Since settings such as license_plist.yml and .swiftlint.yml are already in place, all you need to do is add the package to the project's Package Dependencies and add the plugin to Run Build Tool Plug-ins in the target's Build Phases.

Build Phase settings

Since the output location of LicensePlist cannot be specified via outputPath when it runs as a plugin, you need a Build Phase step that moves the license file under Settings.bundle, as described in the README. Also, the package needs to be included in the app itself, not just linked as a framework. This completely eliminated the "Script Runner" step, saving 12 minutes... and cutting our credit costs in half! 🎉

Bitrise Build Result

As a bonus, the project configuration is simpler, and there is no longer any need to run separate shell scripts for setup or version updates. In our case everything was compatible with the Build Tool Plugin, so I changed the configuration, but I also tried nest as a different approach. nest lets you reduce CI time while still managing your existing CLI tools as separate packages. Replace the package that installs the CLI tools under the tools directory with nest:

Project/
├── Hoge.xcworkspace
├── Hoge.xcodeproj
├── Test/
│   └── ...
├── ...
└── tools
    └── nestfile.yaml // replace this part

When you run nest bootstrap nestfile.yaml, the binaries are installed into tools/.nest/bin, so set them to be executed in the Build Phase.

Configuring SwiftLint in the build phase

This may be useful if a tool does not support the Build Tool Plugin.

Reviewing the Tests

In our project, all tests were packed into one test target, so every test was always run. Furthermore, the snapshot tests were very heavy, taking about an hour, so the code that compared screenshots against reference images was commented out on CI so that it would not be executed.
However, because asynchronous drawing steps such as waits before comparison still execute, failures pile up timeouts and leave you waiting a long time, which also eats up credits. Therefore, I separated the snapshot tests that were not running on CI into a separate test target and controlled which tests run using a Test Plan.

First, create a test target for the long-running snapshot tests:

Create a test target

After configuring the target using the existing test targets as a reference, move the snapshot tests to the newly created target via Target Membership, either in Compile Sources under Build Phases or in the File inspector for each test file. If a moved test file depends on a test file in the original test target, it will no longer build, so you will need to untangle those dependencies as you go.

Change the target

Next, create a Test Plan. A Test Plan is a collection of tests to be run and their configuration. The tests you want to run can be specified per test target, which is exactly why we created a separate test target. Test Plans can be linked to schemes, and in our app each scheme has a one-to-one relationship with a Test Plan. In the Test Plan for the scheme used on CI, make sure snapshot tests are not run.

TestPlan settings

When you actually run it, the execution time on CI doesn't change much unless there is a failure. However, the local testing experience improved significantly. Snapshot tests used to run even when only logic was changed, but now they can be skipped by simply unchecking a box, a significant time savings. For now we have simply stopped running tests that were never actually being run on CI anyway, but we would like to bring snapshot tests back once we have balanced them against credit usage.

Conclusion

In addition to the steps introduced here, other measures that can shorten build times include fixing code that takes a long time for type inference and deleting unused assets. These efforts reduced the average time per CI run from about 22 minutes to about 12 minutes, saving about 45% in credits. This time we focused on reducing the time before and after the build, which is something we could do immediately; next we would like to cut down the build time itself even more.
Introduction

Nice to meet you. My name is Yena, and I work on Android app development at KINTO Technologies. My career began with Android development, and I have been involved in a wide range of areas, including smart TV apps, web backend and frontend development, and API development. While currently working on Android development, I also plan and run an in-house study group called "Droid Lab" to support the growth of both myself and my team.

To share the technical content from the study group more broadly outside the team, we considered publishing the materials as tech blog posts, but we faced the following challenge: since the materials are mainly created in Confluence or PPT format, they need to be converted to Markdown before being posted, which makes it impossible to publish them as-is and requires significant effort. This conversion work was a significant burden, making it difficult to secure enough time to write blog articles, which led to delays in sharing information as intended.

To solve this, we introduced a mechanism that leverages ChatGPT's Custom GPT feature to automatically convert source materials, such as those in Confluence and PPT formats, into Markdown. We are currently running trial operations, and this mechanism is expected to bring benefits such as reducing the burden of conversion work and improving the efficiency of information sharing. Here, I will explain the creation procedure and settings of the GPT that we actually introduced.

How To Create a GPT

1. Creating a GPT

After logging in, click "Explore" on the top left. Select "Create GPT" from the My GPT category. Set a name and profile picture in GPT Builder. DALL·E can also automatically generate an image that matches the name you created (the image below was generated by AI based on the name we created in DALL·E).

Setting GPT details

1. Description

A brief one- or two-line description of what this GPT does. Example: This GPT automatically generates technical blog articles in Markdown format based on Droid Lab materials. The branch used for creating blog articles should follow this format: (branch name: sample/YYYY-MM-DD- ).

2. Instructions

Here, you describe in detail how the GPT should behave, what kind of output it should generate, and any constraints. This is the core of the prompt. Below are the output rules the GPT should follow:

- Output the Markdown body text below the YAML (in our case we have a fixed format, so we define it in YAML)
- Heading -> ## Heading
- List -> - Item
- Inline code -> Code
- Code block -> opened and closed with fences, e.g. kotlin
- Image -> ![description](URL)
- ⚠️ Do not change the structure, style, or order of sentences.
- The output must always start with Markdown and be enclosed within a code block.
- If an internal API URL or a confidential name is included, clearly state "【Non-public information may be included】" and provide suggested corrections.
- Even if the output is long, do not stop midway; return the entire output in one go.

3. Conversation starters

Register example sentences to show how users should use it. Example: Convert the Confluence memo into a technical blog in Markdown format.

4. Knowledge

Specify supplementary materials (such as style guides and sample articles) to be uploaded to the GPT:

- Internal blog style guide
- Samples of previously published articles

* Be sure to confirm that no confidential information is included before uploading.
5. Capabilities

- Web Search
- Canvas
- DALL·E
- Code Interpreter

Each function can be turned on and off as needed!

Setting GPT details

After entering all the information, click the "Create" button in the upper right, and GPT creation is complete.

2. Setting up GPT

Here, I will explain the operating rules and constraints set for the GPT. Below are the main functions we set up to comply with our internal blog format and ensure proper Markdown conversion via GPT:

| Function | Description |
| --- | --- |
| ✍️ Conversion to Markdown format | Converts the pasted text directly to Markdown syntax without changing the style or structure. |
| 📄 Automatic addition of YAML meta information | Automatically generates postId, title, and excerpt to match the company's blog format. |
| 🧱 Preservation of syntax and unification of output format | Outputs everything in a single code block to prevent Markdown syntax from being broken. |
| 🔐 Security check function | Detects internal APIs and internal code names, marks them as 【Possibly confidential information】, and suggests corrections. |
| ⚠️ Output interruption prevention logic | Outputs everything in one go, even for long text, without stopping midway, to prevent broken syntax. |
| ⚠️ Japanese/English check | Detects variations in notation, typos, and unnatural expressions, and suggests corrections as necessary. |

2-1. 🧠 Detailed specifications of GPT (prompt settings)

This GPT functions as a professional tech blog writer and a Markdown conversion tool. When a user pastes text from Confluence or an internal memo, it is output according to the format below, without altering the original structure, order, or style.

1. YAML Meta Information Output Rules

To publish a blog post, YAML meta information (such as title, date, and category) must be defined at the beginning. This GPT automatically generates a YAML header from the pasted text, according to the following rules:

- YAML must be output in a yaml code block in the prescribed format.
- The "title" and "excerpt" fields must be automatically completed by inferring and extracting them from the text.
- If there is no title, it must be output as 'Article title here'.
- If the article is very long, the Markdown code block must be automatically split into multiple blocks, and output must continue until completion, without the user needing to prompt with "continue."

Automatic Category Judgment Rules

If the post contains the following keywords, the category is automatically replaced with the corresponding one:

- Kotlin, Compose, MVVM, KMP -> "Android"
- GitHub, CI/CD, CodeBuild -> "DevOps"
- Lint, architecture, coding standards -> "Architecture"
- Confluence, Markdown, GPT -> "Tooling"
- Firebase, AWS, S3 -> "Cloud"
- Study group, internal sharing, knowledge -> "Team"
- AI, ChatGPT, Prompt, Natural Language Processing -> "Generative AI"

If multiple categories apply, the category that appears most frequently takes priority.
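For readers who prefer seeing the rule as code, here is a rough Kotlin sketch of the same keyword-counting logic. The keyword table mirrors the list above, while judgeCategory and its tie-breaking details are my illustration, not the actual logic inside the GPT prompt.

```kotlin
// Hypothetical sketch: count keyword hits per category and let the most
// frequent category win, mirroring the judgment rules listed above.
val categoryKeywords = mapOf(
    "Android" to listOf("Kotlin", "Compose", "MVVM", "KMP"),
    "DevOps" to listOf("GitHub", "CI/CD", "CodeBuild"),
    "Architecture" to listOf("Lint", "architecture", "coding standards"),
    "Tooling" to listOf("Confluence", "Markdown", "GPT"),
    "Cloud" to listOf("Firebase", "AWS", "S3"),
    "Team" to listOf("Study group", "internal sharing", "knowledge"),
    "Generative AI" to listOf("AI", "ChatGPT", "Prompt", "Natural Language Processing"),
)

fun judgeCategory(post: String): String? =
    categoryKeywords
        .mapValues { (_, words) ->
            // total occurrences of this category's keywords in the post
            words.sumOf { word ->
                Regex(Regex.escape(word), RegexOption.IGNORE_CASE).findAll(post).count()
            }
        }
        .filterValues { it > 0 }   // ignore categories with no keyword hits
        .entries
        .maxByOrNull { it.value }  // most frequent category wins
        ?.key
```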
💡 Below is an example of the YAML header format used for our internal blog (it may not be usable in external environments):

[Example]

```yaml
---
postId: "<auto-generated ID>"
title: "<article title>"
excerpt: "<summary>"
coverTitle: "<cover heading>"
coverImage: "<image path (for the internal blog)>"
date: "<ISO-format date>"
category: "<category name>"
....
---
```

2. Markdown Body Text Output Rules

Markdown conversion rules:

- Heading -> ## Heading
- List -> - Item
- Inline code -> Code
- Code block -> Example: kotlin (※ Be sure to close after opening.)
- Image -> ![explanation](image URL)
- ⚠️ Do not change sentence structure, order, or style.
- ⚠️ Output everything accurately up to the closing tag to avoid breaking the Markdown syntax.

3. Post-Output Automatic Check Function

- Put all the output (YAML + Markdown body text) in one code block (Markdown).
- The output must begin with Markdown and end with the corresponding closing code block tag.
- Even if the text is long, the output must not stop midway, and the entire text must be returned in one go.

3-1. 🔐 Security Check

Check that the following items are not included:

- Internal API URLs
- Internal library names
- Project code names

If confidential information such as a customer ID is detected, mark it as "【Possible confidential information】" and present a proposed revision suitable for publication at the same time.

Detection example (screenshot)

3-2. ⚠️ Japanese/English Check

- Japanese: typos, misused particles, awkward phrasing, and inconsistencies in sentence endings, among other issues.
- English: spelling mistakes, grammatical errors, unnatural expressions, etc.

If necessary, the proposed revision is output under the heading "⚠️ Text check:". However, the order, structure, and style of the text are never changed; mistakes are simply pointed out.

Detection example (screenshot)

4. Usage Demo

4-1. Execution Steps

💬 Step 1: Enter the text you want to convert
💬 Step 2: Paste it into the custom GPT and send
💬 Step 3: Receive the output in Markdown format

5. Benefits Gained from the Implementation

The implementation of this GPT has brought several benefits, including increased efficiency in Markdown conversion and improved quality of information sharing. Incidentally, this article has also been automatically converted to Markdown format using this GPT.

| Item | Before | After | Benefit |
| --- | --- | --- | --- |
| Markdown conversion task | 30 minutes to 2 hours or more | Tens of seconds to several minutes | Reduced workload by more than 80% |
| Format unification | Varies by individual | Automatic, stable output | Improved quality and readability |
| Security verification | Manual verification | Automatic detection and marking | Safe to publish |
| Sentence verification | Manual verification | Automatic detection and marking | Safe to publish |
| Users | Mainly those familiar with the Markdown format | Members unfamiliar with Markdown can also use it | Expanded scope of use |

6. Summary

We introduced this GPT to reduce the effort involved in Markdown conversion, aiming to make it easier to casually and widely share the knowledge gained at Droid Lab, both within the team and beyond. By leveraging generative AI, time-consuming conversion tasks have been streamlined, enabling us to share information more efficiently while maintaining confidence in both quality and security. Moving forward, we plan to improve usability by adding features like direct PowerPoint uploads and automatic meeting summaries. We aim for a future where knowledge sharing across the entire development team becomes even smoother! 🚀

Potential Applications

- Addition of a Japanese-to-English conversion function: support for global communication and sharing with international members.
- Support for PPT uploads: we are currently developing a system that eliminates manual copying and allows conversion simply by uploading files.
- Introduction of a GPT for meeting summarization: optimization to automatically extract summaries and to-dos by simply pasting meeting logs and minutes.
The other day, during a casual chat with a colleague, he suddenly said: "AI has come so far in the blink of an eye. I can't even imagine what it'll be like in five years."

I'm not an AI expert, but it just so happens I've looked into this a little. And the answer to that question isn't as simple as "It'll get way better." The thing is, there are some deep-rooted challenges in how the technology actually works. So here's my take: I'd like to share what some of these challenges are, and what might help us get past them.

Humans as the Bottleneck

As you know, generative AI, including large language models, needs a whole lot of data to learn. And by "a whole lot," I mean enormous. That data is collected from publicly available sources through web crawling and scraping, as well as from books, code repositories, and so on. And the key point is that all of that content is created by humans.

But we humans just aren't fast enough. We can't produce new data at the rate AI is consuming it. According to a paper by Pablo Villalobos from the research institute Epoch AI, if current trends continue, we could run out of high-quality, publicly available human-generated text data sometime between 2026 and 2032. In other words, "scaling up with more data" may not work anymore beyond that point, simply because there isn't enough new human-generated content left to feed these models.

Forecasting Human-Generated Public Text and Data Consumption in LLMs

Reusing data (a technique called multi-epoch learning) has some effect, but it's not a fundamental solution. To make matters worse, a lot of the data currently proliferating is of poor quality: spam, social media comments, extremely biased information, misinformation, even illegal content. It's also worth pointing out that in languages less commonly used than English, human-generated content accumulates much more slowly, so in those languages the gap between the data humans create and the data AI needs could become an even bigger problem.

So, what should we do about it? Here are a few of the proposed solutions:

- Using synthetic data (i.e., data generated by AI itself) for training. While this can be effective in some areas, it also comes with a risk of "model collapse." I'll get into the details in the next section.
- Utilizing non-public data. This means using proprietary data held by companies for AI training, which obviously raises serious legal and ethical questions. In fact, some companies, such as the New York Times, have already banned AI vendors from scraping their content.
- Improving model efficiency. Instead of just making models bigger, the idea is to train them to learn smarter. We're actually starting to see signs of this shift: when using tools like ChatGPT, we can see something like "reasoning," where the model links multiple steps logically rather than just recalling memorized information.

Inbreeding in Generative Models

As mentioned earlier, one way to increase the amount of training data is to generate more of it. But this comes with its own risks. In this paper, Zakhar Shumaylov from the University of Cambridge investigates the question: "What happens when we train a next-generation model on data generated by past AI models, rather than by humans?" The authors point to a dangerous feedback loop called model collapse.
When AI-generated data is used over and over again for training, the model gradually drifts away from the original distribution of real-world data. As a result, its outputs become more generic, monotonous, and distorted. In particular, rare and subtle features are more likely to get lost. This mainly happens for two reasons:

- Statistical errors build up over generations, due to limited samples
- Functional errors emerge because the model can't perfectly reproduce complex data distributions

Visual images of model collapse

Interestingly, keeping just 10% of the original human-generated data can help reduce model collapse to some extent. However, it cannot be completely prevented. Unless we make a deliberate effort to preserve real, human-generated data, AI models will increasingly end up trapped in a narrow, self-reinforcing worldview. It's effectively digital inbreeding.

Furthermore, Gabrielle Stein from East Carolina University explored whether this issue could be avoided through "cross-model learning," in which AIs exchange and learn from each other's data. The conclusion? It didn't really make much of a difference. In her study, she trained models on different proportions of human data: 100%, 75%, 50%, 25%, and 0%. The results showed the following trends:

- As the proportion of synthetic data increased, linguistic diversity steadily declined
- No "tipping point" was observed where performance suddenly collapsed at a specific percentage
- Even a small amount of human data helped slow the rate of degradation

She suggests that to avoid early-stage model collapse, at least half of the training data should reliably come from confirmed human-written content. Considering that much of the data we see online is generated by AI, and that most AI training data is scraped from the Internet, this paints a somewhat bleak picture for the future of AI. As AI-generated content increasingly makes its way into training data, the risk of future model collapse keeps growing. Still, some fresh, innovative approaches are emerging that could lead to a breakthrough.

What Comes Next?

To address the challenges I've covered so far, a few relatively new approaches have started to emerge. While they're not permanent solutions, they might be able to delay the collapse caused by inbreeding and data shortages for a while.

One example already mentioned is AI reasoning. This refers to behavior in models like ChatGPT where the model goes through multiple steps of internal reasoning and judgment before producing a final answer.

Another promising method is called Retrieval-Augmented Generation (RAG). Put simply, this approach lets AI models generate responses not just from their training data, but also by pulling in external documents. For example, feeding a PDF into an LLM or letting it search the Internet before answering a question falls into this category. That said, as you can probably guess, this doesn't solve the underlying problem of a lack of data. After all, the amount of new, reliable information we can feed into a model is still limited.

So, what are promising approaches that haven't been fully realized yet? One example is the trend toward synthetic reality and embodied agents. This is a completely different approach to AI development. Instead of learning passively from static datasets, the idea is to place AI agents in dynamic virtual environments where they act, explore, and adapt to achieve goals.
The data obtained is self-generated, produced through experiencing results, testing hypotheses, and planning strategies. It's contextual, diverse, grounded in interaction, and extremely high in quality. This method enables sustainable, self-renewing learning in environments with near-infinite variation. Even as human-written text runs out, it helps AI avoid the trap of being stuck in its own output.

...However, we're not there yet. Sure, we've been successful at offloading all sorts of boring tasks onto AI. But for now, it looks like we still have a fair amount of work to do ourselves. Thanks for reading!
In 2025, as AI continues to evolve rapidly, being able to use AI effectively has become a key skill for engineers. To do so, however, it is essential to understand prompts (how to give instructions) properly, which takes experience and knowledge. As a first step in coding with AI, I will introduce development that combines TDD (test-driven development) with AI, which is the theme of this article.

Benefits of TDD × AI

✅ Drastically reduced implementation cost! Engineers "only need to write tests." Other than writing tests, they do not need to give complex instructions or prompts. After that, the AI automatically generates the code.

✅ Development speed skyrockets! Detailed back-and-forth communication is dramatically reduced. AI can instantly generate code at each step of TDD, significantly accelerating development and improving consistency across the codebase.

✅ Exceptional code quality! AI output can be controlled with proper testing. Proper testing keeps the AI-generated code under control, resulting in code with fewer bugs.

What is TDD?

Here is a brief explanation of TDD (test-driven development), which is the fundamental premise.

https://www.amazon.co.jp/dp/4274217884

"TDD (Test-Driven Development)" is a methodology proposed by Kent Beck in his book 👆 over 20 years ago. By repeating the simple cycle shown in the diagram above (first write a test, then implement code to pass the test, and finally refactor), you can produce high-quality, maintainable code. Since testing serves as the starting point for implementation, development can proceed while ensuring a testable structure.

Practicing TDD

:::message
This is performed using the Agent mode of GitHub Copilot in VSCode.
:::

https://code.visualstudio.com/docs/copilot/copilot-edits#_use-agent-mode-preview

With this method, an engineer writes tests and generative AI handles implementation and refactoring, enabling efficient, high-quality development. I will explain the flow of TDD × AI using password validation as an example. Let's start with character count validation.

:::message alert
It is important to proceed in small steps, without filling in all the specifications from the beginning!
:::

Step 1: Creating a Character Count Validation Test

Prepare the target password validation class and its test file. You can leave both files empty at this stage. Once the files are created, you might be tempted to immediately write the password validation class. But since we're using TDD, we start by writing tests.

```swift
@Test("Validation succeeds when the password is 8 to 16 characters long")
func testPasswordLengthValidation() async throws {
    let sut = PasswordValidator()
    let validPasswordShort = "Valid123"
    let validPasswordLong = "ValidPassword123"
    let invalidPasswordShort = "Short01"
    let invalidPasswordLong = "ThisPasswordIsWay"

    #expect(sut.validate(validPasswordShort) == true)
    #expect(sut.validate(validPasswordLong) == true)
    #expect(sut.validate(invalidPasswordShort) == false)
    #expect(sut.validate(invalidPasswordLong) == false)
}
```

Since the PasswordValidator class and validate method aren't implemented yet, you'll naturally get an error. Next, input a prompt to the AI and ask it to implement.

![input prompt](/assets/blog/authors/HiroyaHinomori/2025_03_tdd_x_ai_02.png =500x)

Then, after a few tens of seconds, the AI generates an implementation.
```swift
struct PasswordValidator {
    // Define the password length constraints as constants
    private let minimumPasswordLength = 8
    private let maximumPasswordLength = 16

    /// Validates whether the password is valid
    /// - Parameter password: the password to validate
    /// - Returns: true if the password is valid, false otherwise
    func validate(_ password: String) -> Bool {
        let passwordLength = password.count
        return passwordLength >= minimumPasswordLength && passwordLength <= maximumPasswordLength
    }
}
```

Next, let's add a feature to validate the mix of numbers, letters, and symbols.

Step 2: Adding Validation of Numbers, Letters, and Symbols

```swift
@Test("Validation succeeds when the password contains numbers, letters, and symbols")
func testPasswordCharacterValidation() async throws {
    let sut = PasswordValidator()
    let validPassword = "ValidPassword12!"
    let invalidPasswordNoNumber = "InvalidPassword!"
    let invalidPasswordNoCharacter = "1234567890!"
    let invalidPasswordNoSymbol = "InvalidPassword1"

    #expect(sut.validate(validPassword) == true)
    #expect(sut.validate(invalidPasswordNoNumber) == false)
    #expect(sut.validate(invalidPasswordNoCharacter) == false)
    #expect(sut.validate(invalidPasswordNoSymbol) == false)
}
```

At this stage, an error will occur, but don't worry about it; just provide the test case to the AI and let it generate the implementation.

```swift
struct PasswordValidator {
    // Define the password length constraints as constants
    private let minimumPasswordLength = 8
    private let maximumPasswordLength = 16

    /// Validates whether the password is valid
    /// - Parameter password: the password to validate
    /// - Returns: true if the password is valid, false otherwise
    func validate(_ password: String) -> Bool {
        let passwordLength = password.count
        guard passwordLength >= minimumPasswordLength && passwordLength <= maximumPasswordLength else {
            return false
        }

        let hasNumber = password.rangeOfCharacter(from: .decimalDigits) != nil
        let hasLetter = password.rangeOfCharacter(from: .letters) != nil
        let hasSymbol = password.rangeOfCharacter(from: .symbols) != nil
            || password.rangeOfCharacter(from: .punctuationCharacters) != nil

        return hasNumber && hasLetter && hasSymbol
    }
}
```

Up to this point, we completed the implementation in under 10 minutes. In traditional coding, you need to explicitly define various conditions and specifications in the prompt, but with this method you simply ask the AI to implement something that passes the tests. Since all the implementation details are written in the tests, there is almost no need for complex prompt instructions.

To Further Streamline Communication with AI

If you write implementation rules and constraints in advance in "copilot-instructions.md," there's no need to provide detailed instructions to the AI each time.

```
Respond in Japanese.

### Coding rules
- Use swift-testing for tests.
- As a rule, do not use magic numbers in implementations.
- Follow the DRY principle.
- Follow the KISS principle.
- Follow the YAGNI principle.
```

To Become an Engineer Who Thrives in the Age of AI

AI is not omnipotent. But that's no reason to give up! It is important to calmly determine what AI is good at and what humans should handle. With "TDD × AI," let's understand the coding habits of AI and reach new levels of speed and quality in development! 🚀
Introduction

Hello. My name is Shiode, and I do payment-related backend development in the Toyota Woven City Payment Solution Development Group. As mentioned in my previous article, our group uses Kotlin for development, with Ktor as our web framework and Exposed as the ORM. We also follow Clean Architecture in our code.

Initially, we used Kotlin's Result type for error handling, but as the number of developers increased, we started seeing a mix of Result and throw in the code. Mixing throw into code that uses Result defeats the purpose of expressing error handling through types, as it still requires try-catch blocks. Since Kotlin doesn't have Java's checked exceptions, it's easy to forget to wrap a call in try-catch, which can lead to unhandled errors. To improve this situation, we discussed it within the team and decided to standardize our error handling on Kotlin's Result type. In this article, I'll walk you through how our group writes error handling in practice.

This article does not include the following:

- Explanation of Clean Architecture
- Explanation of Ktor and Exposed
- Comparison between kotlin-result and Kotlin's official Result type

Application Directory Structure

Before getting into the main topic, I will explain the directory structure of our application. Below is the well-known Clean Architecture diagram along with our group's directory structure. As we've adopted Clean Architecture, our application's directory structure generally follows its principles. (Source: The Clean Code Blog)

App Route/
├── domain
├── usecase/
│   ├── inputport
│   └── interactor
└── adapter/
    ├── web/
    │   └── controller
    └── gateway/
        ├── db
        └── etc

The correspondence between our directory structure and the Clean Architecture diagram is as follows:

- domain directory: entities
- usecase directory: Use Cases
- adapter/web/controller directory: Controllers
- adapter/gateway directory: Gateways

The terminology doesn't match exactly, but basically the domain directory sits at the core, the usecase directory surrounds it, and everything under adapter forms the outermost layer. Therefore, the allowed directions of dependency are:

- usecase -> domain
- Everything under adapter -> usecase or domain

This direction of dependency makes it possible to develop business logic without being affected by factors such as web frameworks or database types.

Error Handling Policy

Our error handling is based on the following policies:

- Use the Result type instead of throw when processing can fail
- When a function returns a Result type, do not use throw
- When returning an exception, use a custom-defined exception type

In the next sections, I'll go over each of these policies in more detail, with code examples.

Use the Result Type When a Function May Fail

Since Kotlin doesn't have checked exceptions like Java, there's no mechanism to force the caller to handle errors. By using the Result type, you can explicitly indicate to the caller that an error may be returned, reducing the chances of error handling being missed. However, in cases like Result<Unit>, where the return value isn't used, error handling cannot be enforced unless a custom lint rule is defined; as of now, we haven't defined one.

Code Example

Below is a simple code example. A function that performs division typically results in an error if the denominator is zero. If a function might fail, specify Result as its return type; in this example, Result<Int>.
```kotlin
fun divide(numerator: Int, denominator: Int): Result<Int> {
    if (denominator == 0) {
        return Result.failure(ZeroDenominatorException())
    }
    return Result.success(numerator / denominator)
}
```

When Returning an Exception as a Result Type, Wrap the Exception in a Custom-defined Exception

Repositories are defined as interfaces in the domain layer, with their implementations residing in the adapter layer. If a use case calls a repository function and handles errors, and the adapter layer returns a third-party library exception as-is, then the use case layer must be aware of that third-party exception. In that case, the use case layer becomes dependent on the adapter layer. Here's what that looks like in a diagram:

![Dependencies](/assets/blog/authors/reona-shiode/error-handling/dependency.png =400x)

Interface-based Dependency and Exception-based Dependency (bad example)

To avoid this, we always make sure to wrap any exception returned via Result in a custom-defined exception. One tricky point when applying Clean Architecture is deciding which layer exceptions belong to; I personally think it should be the domain layer. Our group uses a shared set of custom exceptions across multiple services, so we've extracted them into a separate domain library.

Another tricky point with Kotlin's official Result type is that it doesn't let you specify the exception type, which means you can't enforce returning only custom exceptions. In cases like this, it may be worth considering kotlin-result. However, we chose not to adopt it in order to avoid introducing third-party types into the domain code.

Code Examples

Let's say the following interface is defined in the domain layer:

```kotlin
data class Entity(val id: String)

interface EntityRepository {
    fun getEntityById(id: String): Result<Entity>
}
```

Now, consider a case where a third-party library exposes a method like the one shown below, and it's used as-is:

```kotlin
fun thirdPartyMethod(id: String): Entity {
    throw ThirdPartyException()
}
```

Bad Example

If the implementation in the adapter layer returns the third-party exception directly, as shown below, it leaks to callers such as the UseCase:

```kotlin
class EntityRepositoryImpl : EntityRepository {
    override fun getEntityById(id: String): Result<Entity> {
        // This returns the third-party exception as-is
        return kotlin.runCatching { thirdPartyMethod(id) }
    }
}
```

Good Example

To prevent third-party exceptions from leaking to the caller, wrap them in a custom-defined exception:

```kotlin
class EntityRepositoryImpl : EntityRepository {
    override fun getEntityById(id: String): Result<Entity> {
        return kotlin.runCatching {
            thirdPartyMethod(id)
        }.fold(
            onSuccess = { Result.success(it) },
            // Wrap with our own exception; note that onFailure alone is a
            // side effect and would not replace the failure value
            onFailure = { cause -> Result.failure(CustomUnexpectedException(cause)) }
        )
    }
}
```
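To make the benefit concrete, here is a small hedged sketch of a caller branching on the custom exception without importing anything from the third-party library. GetEntityUseCase and the messages are illustrative, not from our actual codebase.

```kotlin
class GetEntityUseCase(private val repository: EntityRepository) {
    fun execute(id: String): String =
        repository.getEntityById(id).fold(
            onSuccess = { entity -> "Found entity ${entity.id}" },
            onFailure = { cause ->
                // Only our own exception types can appear here
                when (cause) {
                    is CustomUnexpectedException -> "Temporary failure, please retry"
                    else -> "Unknown error"
                }
            }
        )
}
```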
Avoid Using throw in Functions That Return Result

If a function can both return a Result and throw an exception, the caller must handle both. Even if the function's author thinks a particular exception doesn't need handling by the caller, there may be cases where the caller wants to handle it. For this reason, we standardized on the Result type and avoid throwing exceptions explicitly. For some cases, such as database connection errors, it might seem acceptable to throw from the adapter layer and let the exception propagate directly to the API response, since recovery at the use case level is impossible. However, for issues like a failed database update, we may still want to log inconsistencies with a third-party SaaS. If such failures are thrown outside our handling scope, there's a risk that alerts won't be triggered appropriately. We believe it's up to the caller to decide whether error handling is necessary, so even if the function's author considers it unnecessary, the exception is returned via Result.

Code Examples

Let's take a repository's save function as an example. The save function receives an entity class and returns the result as a Result<Entity>.

Example of what not to do

As shown below, assume a connection error is thrown, while other errors are returned via the Result type:

```kotlin
class EntityRepository(val db: Database) {
    fun saveEntity(entity: Entity): Result<Entity> {
        try {
            db.connect()
            db.save(entity)
        } catch (e: ConnectionException) {
            // should return a Result here instead
            throw OurConnectionException(e)
        } catch (e: Throwable) {
            return Result.failure(OurUnexpectedException(e))
        }
        return Result.success(entity)
    }
}
```

Now, suppose the use case layer wants to take some action when an error occurs during the save operation. In this case, you must use runCatching (which internally uses try-catch to convert to a Result):

```kotlin
class UseCase(val repo: EntityRepository) {
    fun createNewEntity(): Result<Entity> {
        val entity = Entity.new()
        return runCatching {
            // runCatching is needed here just to catch the thrown exception
            repo.saveEntity(entity).getOrThrow()
        }.onFailure {
            // some error handling here
        }
    }
}
```

Good Example

In the good example, all exceptions are wrapped in custom-defined exceptions and returned via the Result type. This lets the caller drop runCatching, simplifying the code:

```kotlin
class EntityRepository(val db: Database) {
    fun saveEntity(entity: Entity): Result<Entity> {
        try {
            db.connect()
            db.save(entity)
        } catch (e: ConnectionException) {
            return Result.failure(OurConnectionException(e))
        } catch (e: Exception) {
            return Result.failure(OurUnexpectedException(e))
        }
        return Result.success(entity)
    }
}

class UseCase(val repo: EntityRepository) {
    fun createNewEntity(): Result<Entity> {
        val entity = Entity.new()
        return repo.saveEntity(entity).onFailure {
            // some error handling here
        }
    }
}
```

Useful Custom Functions for the Result Type

andThen

When using a Result type, you often want to use the value of a successful Result to produce another Result. For example, updating the status of a specific entity might look like this:

```kotlin
fun UseCaseImpl.updateStatus(id: Id): Result<Entity> {
    val entity = repository.fetchEntityById(id).getOrElse { return Result.failure(it) }
    val updatedEntity = entity.updateStatus().getOrElse { return Result.failure(it) }
    return repository.save(updatedEntity)
}
```

In such cases, the code is easier to write when the operations can be connected in a method chain. The kotlin-result library provides an andThen function for this purpose, but Kotlin's official Result type does not, so our group defined and uses the following:

```kotlin
inline fun <T, R> Result<T>.andThen(transform: (T) -> Result<R>): Result<R> {
    if (this.isSuccess) {
        return transform(getOrThrow())
    }
    return Result.failure(exceptionOrNull()!!)
}
```

Using this, the previous example can be rewritten as shown below. The result is a bit cleaner, with less repetitive code:

```kotlin
fun UseCaseImpl.updateStatus(id: Id): Result<Entity> {
    return repository.fetchEntityById(id).andThen { entity ->
        entity.updateStatus()
    }.andThen { updatedEntity ->
        repository.save(updatedEntity)
    }
}
```
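As a quick, illustrative check of andThen's short-circuiting behavior, reusing the divide function from the earlier example:

```kotlin
fun main() {
    // Both steps succeed: prints Success(5)
    println(divide(10, 2).andThen { half -> divide(half, 1) })

    // The first step fails, so the chain stops:
    // prints a Failure wrapping ZeroDenominatorException
    println(divide(10, 0).andThen { half -> divide(half, 1) })
}
```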
doInTransaction for Exposed

Our group uses Exposed as the OR mapper. With Exposed, all database operations must be written within the lambda scope called transaction. If an exception is thrown within this transaction scope, a rollback is performed automatically. Since using the Result type avoids throwing exceptions, we created a function that rolls back automatically when the Result indicates failure:

```kotlin
fun <T> doInTransaction(db: Database? = null, f: () -> Result<T>): Result<T> {
    return transaction(db) {
        f().onFailure { rollback() }.onSuccess { commit() }
    }
}
```

Applying this to the previous UseCaseImpl example, it can be used as follows:

```kotlin
fun UseCaseImpl.updateStatus(id: Id): Result<Entity> {
    return doInTransaction {
        repository.fetchEntityByIdForUpdate(id).andThen { entity ->
            entity.updateStatus()
        }.andThen { updatedEntity ->
            repository.save(updatedEntity)
        }
    }
}
```

respondResult for Ktor

Our group uses Ktor as the web framework. We created a function called respondResult so that Result values from use cases can be returned directly as HTTP responses:

```kotlin
suspend inline fun <reified T : Any> ApplicationCall.respondResult(code: HttpStatusCode, result: Result<T?>) {
    result.onSuccess {
        when (it) {
            null, is Unit -> respond(code)
            else -> respond(code, it)
        }
    }.onFailure {
        // defined below
        respondError(it)
    }
}

suspend fun ApplicationCall.respondError(error: Throwable) {
    val response = error.toErrorResponse()
    val json = serializer.adapter(response.javaClass).toJson(response)
    logger.error(json, error)
    respondText(
        text = json,
        contentType = ContentType.Application.Json,
        status = error.errType.toHttpStatusCode(),
    )
}
```

Although it's simple, this function eliminates the need to call Result.getOrThrow, making the code a bit cleaner:

```kotlin
fun Route.route(useCase: UseCase) {
    val result = useCase.run()
    call.respondResult(HttpStatusCode.OK, result.map { it.toViewModel() })
}
```

By the way, respondError is a function that builds an error response from a Throwable. We use it to handle exceptions thrown in the Ktor pipeline and return appropriate responses. We've also created a custom Ktor plugin to handle such exceptions:

```kotlin
val ErrorHandler = createApplicationPlugin("ErrorHandler") {
    on(CallFailed) { call, cause ->
        call.respondError(cause)
    }
}
```

Conclusion

I introduced how our group handles errors, along with some helpful custom functions for the Result type. From what I've seen in various tech blogs, many companies seem to use kotlin-result, while there's relatively little information out there on using Kotlin's official Result type. We've found Kotlin's official Result type sufficient for error handling, so I encourage you to give it a try!
Hello! My name is Mayu, and I work as a designer in the Creative Office at KINTO Technologies. I usually focus on UI/UX design for apps, but this time I was in charge of creating novelty items to distribute at a company event. In this article, I'll share a behind-the-scenes look at the process, from planning to design. I hope this offers some helpful insights for those involved in novelty production.

Novelty Selection

The theme is "something that gives a sense of unity"

For this event, we aimed to create novelty items that foster a sense of unity. We developed ideas based on the following conditions:

- Creates opportunities to communicate with people you don't usually interact with.
- Strengthens a sense of unity.
- Promotes innovation.
- Appeals to all, regardless of age or gender.
- Meets the needs of multiple people.
- Easy for anyone to use immediately.
- Budget: a few hundred to around a thousand yen per person.
- Offers lasting value.

After considering various ideas, we ultimately decided to produce a "Magnetic Card Stand" and an "Original Name Card."

Reasons for choosing the "Magnetic Card Stand" and "Name Card"

Magnetic Card Stand:
- Placing it on the desk makes it easier to naturally engage with others, promoting communication.
- Featuring the KINTO Technologies logo and car shape helps foster attachment to the company and boost motivation.
- Its simple design makes it easy for anyone to use in daily situations.

Name Card:
- Creating name cards with each employee's name makes it easier to approach one another even at first meetings, promoting communication across the company.
- The cut-off KTC lettering design visually expresses a sense of unity throughout the organization.
- The cards can be used as name tags during the event and placed on desks afterward for continued use.

Production of the Magnetic Card Stand

1. Contractor Selection and Request

We commissioned the production of the magnetic card stand to "MOKU," a website specializing in original goods. The deciding factor was MOKU's high level of customization, which made it possible to create original magnetic card stands simply by submitting design data.

2. Prototyping

We created a simple paper prototype to check the size and usability. We then placed it on an actual desk to evaluate visibility and practicality.

3. Design of the Magnetic Card Stand

Using Adobe Illustrator, we created a design with the logo positioned on the specified template. The result is a simple design that highlights the KINTO Technologies logo.

4. Design of the Instruction Manual

To ensure ease of use, we created an original instruction manual. Here too, we used Adobe Illustrator and produced the design data based on the specified template.

5. Data Submission and Delivery

The design data was submitted, and delivery was completed in about three weeks! (Order quantity: 500 pieces)

Production of the Name Card

1. Create a Name Card Design

Using Figma, we created an original design featuring names, divisions, and custom Slack emojis. We made a prototype to ensure it fit properly with the magnetic card stand. The key feature is the half-cut KTC lettering. KTC stands for "KINTO Technologies." The small squares represent employees, symbolizing the idea that "each individual comes together to form KTC." With a simple, stylish black-based design, it also brings out the essence of a tech company.

2. Automatic Data Generation

Creating the data manually for everyone would have been overwhelming, so we enlisted our in-house engineers to help. Together we built a system that imports employee information from a CSV file and automatically populates it into an HTML template.
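As a rough idea of what that kind of generator does, here is a minimal, hypothetical Kotlin sketch. The CSV columns, file names, and HTML template are all illustrative; the actual in-house tool may well be built differently.

```kotlin
import java.io.File

// Hypothetical sketch: read employee rows from a CSV and emit one HTML card
// per person by filling placeholders in a template.
fun main() {
    val template = """
        <div class="name-card">
          <p class="name">{{name}}</p>
          <p class="division">{{division}}</p>
          <img class="emoji" src="{{emoji}}" alt="">
        </div>
    """.trimIndent()

    val html = File("employees.csv").readLines()
        .drop(1) // skip the header row: name,division,emoji
        .map { line ->
            // destructure the three illustrative columns
            val (name, division, emoji) = line.split(",")
            template
                .replace("{{name}}", name)
                .replace("{{division}}", division)
                .replace("{{emoji}}", emoji)
        }
        .joinToString("\n")

    File("name-cards.html").writeText(html)
}
```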
3. Printing and Cutting

We printed the materials using the office printer and cut them all by hand. It was tough, but it saved a lot of money! lol

Project Results and Learnings

After distributing the giveaways, we received a lot of encouraging feedback from employees, such as:

- "It's easier to start conversations now!"
- "The design is so cute!"
- "The Slack icon makes it feel even more personal!"

The giveaways helped foster a sense of unity, and the project was incredibly rewarding for me as well. This experience also reinforced the importance of not just designing something attractive, but thinking carefully about how it will actually be used. I believe we were able to showcase the true power of purpose-driven design.

Finally

I hope to apply what I've learned from this project to future design work. If you thought, "KINTO Technologies sounds like a fun place!" please check out our recruitment page! Looking forward to hearing from you! Thank you for reading!
Replacing EncryptedSharedPreferences with Tink + DataStore

Hello. I'm Osugi from the Toyota Woven City Payment Development Group. Our team develops the payment system used in Toyota Woven City by Woven by Toyota, covering payment-related features broadly, from the backend to the web frontend and mobile applications. This post summarizes how we replaced EncryptedSharedPreferences, which has officially been deprecated, in our Android app.

Introduction

EncryptedSharedPreferences was deprecated as of v1.1.0-alpha07, and migrating to the Android KeyStore is now officially recommended.

![Updates of security-crypto](/assets/blog/authors/osugi/20250616/security-crypto.png =600x)

Researching Alternatives to EncryptedSharedPreferences

With EncryptedSharedPreferences deprecated, we started investigating persistence mechanisms and encryption technologies.

Choosing a persistence mechanism

In our app's use case, we had only been using EncryptedSharedPreferences to store settings data, so plain SharedPreferences would have been sufficient. Since we were replacing it anyway, though, we followed the official recommendation and adopted DataStore for persistence.

Choosing an encryption library

Here, too, we initially planned to follow the official recommendation and use the Android KeyStore. However, its features are constrained by API level, and using the higher-security implementation (StrongBox) also depends on the device hardware, so simply writing the code does not necessarily guarantee the intended security level. Our app is designed to run on MDM-managed devices that were chosen to support StrongBox, so this constraint was not a problem for us.

While researching encryption libraries, we also came across Tink, an encryption library provided by Google. Looking at the Tink repository, you can see that it uses the Android KeyStore to store its master key. To compare the Android KeyStore and Tink in terms of maintainability and performance, we wrote sample implementations.

Comparing the implementations

Below are sample implementations using the Android KeyStore (with StrongBox and TEE) and Tink. In both cases, we felt the basic implementation was not particularly difficult to get started with. With the Android KeyStore, however:

- the KeyStore's key-generation settings must be changed depending on the encryption algorithm
- management of the initialization vector (IV) is left to the developer
- there are few implementation samples

Tink has the advantage of wrapping all of this nicely.

Example implementation of encryption/decryption using the Android KeyStore

```kotlin
class AndroidKeyStoreClient(
    private val useStrongKeyBox: Boolean = false
) {
    private val keyStoreAlias = "key_store_alias"
    private val KEY_STORE_PROVIDER = "AndroidKeyStore"
    private val keyStore by lazy {
        KeyStore.getInstance(KEY_STORE_PROVIDER).apply { load(null) }
    }
    private val cipher by lazy { Cipher.getInstance("AES/GCM/NoPadding") }

    private fun generateSecretKey(): SecretKey {
        // Reuse the existing key if one has already been generated
        val entry = keyStore.getEntry(keyStoreAlias, null)
        if (entry != null) {
            return (entry as KeyStore.SecretKeyEntry).secretKey
        }
        return KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, KEY_STORE_PROVIDER)
            .apply {
                init(
                    KeyGenParameterSpec.Builder(
                        keyStoreAlias,
                        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
                    ).setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                        .setIsStrongBoxBacked(useStrongKeyBox)
                        .setKeySize(256)
                        .build()
                )
            }.generateKey()
    }

    fun encrypt(inputByteArray: ByteArray): Result<String> {
        return runCatching {
            val secretKey = generateSecretKey()
            cipher.init(Cipher.ENCRYPT_MODE, secretKey)
            val encryptedData = cipher.doFinal(inputByteArray)
            // The IV must be managed by the developer; here it is serialized alongside the ciphertext
            cipher.iv.joinToString("|") + ":iv:" + encryptedData.joinToString("|")
        }
    }

    fun decrypt(inputEncryptedString: String): Result<ByteArray> {
        return runCatching {
            val (ivString, encryptedString) = inputEncryptedString.split(":iv:", limit = 2)
            val iv = ivString.split("|").map { it.toByte() }.toByteArray()
            val encryptedData = encryptedString.split("|").map { it.toByte() }.toByteArray()
            val secretKey = generateSecretKey()
            val gcmParameterSpec = GCMParameterSpec(128, iv)
            cipher.init(Cipher.DECRYPT_MODE, secretKey, gcmParameterSpec)
            cipher.doFinal(encryptedData)
        }
    }
}
```

Example implementation of encryption/decryption using Tink

```kotlin
class TinkClient(
    context: Context
) {
    val keysetName = "key_set"
    val prefFileName = "pref_file"
    val packageName = context.packageName
    var aead: Aead

    init {
        AeadConfig.register()
        aead = buildAead(context)
    }

    private fun buildAead(context: Context): Aead {
        return AndroidKeysetManager.Builder()
            .withKeyTemplate(KeyTemplates.get("AES256_GCM"))
            .withSharedPref(
                context,
                "$packageName.$keysetName",
                "$packageName.$prefFileName"
            )
            .withMasterKeyUri("android-keystore://tink_master_key")
            .build()
            .keysetHandle
            .getPrimitive(RegistryConfiguration.get(), Aead::class.java)
    }

    fun encrypt(inputByteArray: ByteArray): Result<String> {
        return runCatching {
            val encrypted = aead.encrypt(inputByteArray, null)
            Base64.getEncoder().encodeToString(encrypted)
        }
    }

    fun decrypt(inputEncryptedString: String): Result<ByteArray> {
        return runCatching {
            val encrypted = Base64.getDecoder().decode(inputEncryptedString)
            aead.decrypt(encrypted, null)
        }
    }
}
```
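Since we adopted DataStore for persistence, a natural question is how the two pieces fit together. Below is a minimal, hypothetical sketch of putting a TinkClient like the one above in front of Preferences DataStore; the SecureSettings class, store name, and key are illustrative, not our production code.

```kotlin
import android.content.Context
import androidx.datastore.preferences.core.edit
import androidx.datastore.preferences.core.stringPreferencesKey
import androidx.datastore.preferences.preferencesDataStore
import kotlinx.coroutines.flow.first

// Illustrative store name, not from the actual app
val Context.secureStore by preferencesDataStore(name = "secure_settings")

class SecureSettings(private val context: Context, private val tink: TinkClient) {
    private val tokenKey = stringPreferencesKey("auth_token") // illustrative key

    suspend fun saveToken(token: String) {
        // Encrypt with Tink before handing the value to DataStore
        val encrypted = tink.encrypt(token.toByteArray()).getOrThrow()
        context.secureStore.edit { prefs -> prefs[tokenKey] = encrypted }
    }

    suspend fun loadToken(): String? =
        context.secureStore.data.first()[tokenKey]
            ?.let { tink.decrypt(it).getOrNull()?.decodeToString() }
}
```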
Benchmarking the encryption libraries

We benchmarked the encryption processing time of the Android KeyStore and Tink. For the Android KeyStore, we evaluated both the StrongBox and TEE execution environments. The test code configures a common encryption algorithm (AES-GCM) and repeatedly encrypts 10 KB of data, measured with Microbenchmark. Using Microbenchmark let us measure on a real Google Pixel Tablet, on a thread other than the UI thread.

The test code is as follows:

```kotlin
import androidx.benchmark.junit4.BenchmarkRule
import androidx.benchmark.junit4.measureRepeated
import androidx.test.ext.junit.runners.AndroidJUnit4
import androidx.test.platform.app.InstrumentationRegistry
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class ExampleBenchmark {

    @get:Rule
    val benchmarkRule = BenchmarkRule()

    @Test
    fun benchmarkTinkEncrypt() {
        val context = InstrumentationRegistry.getInstrumentation().context
        val client = TinkClient(context)
        val plainText = ByteArray(1024 * 10)
        benchmarkRule.measureRepeated {
            client.encrypt(plainText).getOrThrow()
        }
    }

    @Test
    fun benchmarkStrongBoxEncrypt() {
        val client = AndroidKeyStoreClient(useStrongKeyBox = true)
        val plainText = ByteArray(1024 * 10)
        benchmarkRule.measureRepeated {
            client.encrypt(plainText).getOrThrow()
        }
    }

    @Test
    fun benchmarkTeeEncrypt() {
        val client = AndroidKeyStoreClient(useStrongKeyBox = false)
        val plainText = ByteArray(1024 * 10)
        benchmarkRule.measureRepeated {
            client.encrypt(plainText).getOrThrow()
        }
    }
}
```

The results are summarized below:

| Encryption backend | Average encryption time (ms) | Allocations |
| --- | --- | --- |
| Android KeyStore (StrongBox) | 209 | 4646 |
| Android KeyStore (TEE) | 7.07 | 4786 |
| Tink | 0.573 | 38 |

Because Android KeyStore (StrongBox) and Android KeyStore (TEE) access hardware, they take considerably longer than Tink, which performs encryption in software. The device we chose is relatively high-spec for an Android device, but especially if you adopt Android KeyStore (StrongBox), you will likely need to think about the UX impact.

Note

Incidentally, you can determine which execution environment was actually applied when generating an Android KeyStore key with the following code:

```kotlin
val secretKey = generateSecretKey()
val kf = SecretKeyFactory.getInstance(KeyProperties.KEY_ALGORITHM_AES, KEY_STORE_PROVIDER)
val ki = kf.getKeySpec(secretKey, KeyInfo::class.java) as KeyInfo
val securityLevelString = when (ki.securityLevel) {
    KeyProperties.SECURITY_LEVEL_STRONGBOX -> "STRONGBOX"
    KeyProperties.SECURITY_LEVEL_TRUSTED_ENVIRONMENT -> "TEE"
    KeyProperties.SECURITY_LEVEL_SOFTWARE -> "SOFTWARE"
    else -> "UNKNOWN"
}
Log.d("KeyStoreSecurityLevel", "Security Level: ${ki.securityLevel}")
```

Summary

With EncryptedSharedPreferences deprecated, we selected the technologies to migrate to. Following the official recommendation, we adopted DataStore for persistence. For encryption, we compared the Android KeyStore with Tink and found that Tink abstracts key issuance and encryption nicely, is easier to use, and performs better, while still meeting our security requirements, so we adopted Tink. If you adopt the Android KeyStore, the implementation must take the specs of the target devices into account, so you will need to balance that against your security requirements.
Introduction

Hello, I'm Tada from the Security CoE group at KINTO Technologies; I normally work out of our Osaka office. Our group's mission is to "run guardrail monitoring and kaizen (improvement) activities in real time" for our multi-cloud environment, and we take on a wide range of cloud-security challenges. What our members work on day to day is also summarized in this blog, so please have a look.

Background

Riding the industry-wide wave of LLM (large language model) application development, our product teams are building many LLM applications, which are advancing from PoC to production-ready. As the group that monitors cloud security, we need to put appropriate security measures in place for these applications as well. Our LLM applications are developed mainly on AWS, Google Cloud, and Azure, and are usually built on the generative AI services those cloud vendors provide. Our group monitors and operates Cloud Security Posture Management (CSPM); however, at present no CSPM controls specific to generative AI services are available. On AWS, for example, neither the AWS Foundational Security Best Practices (FSBP) nor the Center for Internet Security (CIS) standards provide controls that directly address generative AI services.

Our group therefore drew up guidelines to be followed when developing LLM applications on each cloud vendor's generative AI services. The guidelines also describe which vendor-provided generative AI services we recommend and how to configure them. In other words, the configuration items written into the guidelines are precisely the controls that CSPM should monitor. We then implemented the controls in the Rego language and operate them as CSPM for Amazon Bedrock.

AI-SPM, which appears in the title, is short for AI Security Posture Management, and seems to be defined along the lines of "a solution for visualizing, managing, and mitigating security and compliance risks in AI-related assets such as AI, machine learning (ML), and generative AI models" - so we deliberately gave this initiative the AI-SPM name.

Security guidelines that LLM applications must follow

In writing the guidelines, we referred to the OWASP Top 10 for LLM Applications 2025, by now probably the go-to reference for any discussion of LLM application security. The OWASP document lists the ten most critical vulnerabilities commonly seen in LLM applications, each with sections such as an overview, examples of the vulnerability, prevention and mitigation strategies, and attack scenarios. Based on this material, we examined which services and features to choose, and which best practices to apply, when developing LLM applications on each cloud.

The first risk on the list is LLM01: Prompt Injection: the risk that malicious input manipulates the LLM's behavior in unintended ways, causing information leaks or unauthorized actions. An effective prevention and mitigation here is to validate and filter the input to the LLM. As for implementing that on a cloud service: on AWS, Amazon Bedrock Guardrails includes a filter for prompt attacks, so enabling that feature is the countermeasure. Then, by checking as a CSPM control whether the feature is enabled, we gain visibility and can drive improvement.

The table below summarizes representative Top 10 risks together with the prevention/mitigation, the implementation on AWS, and the implementation as a CSPM control; use it as a reference.

| Top 10 | Risk overview | Prevention / mitigation | Implementation on AWS | CSPM control implementation |
| --- | --- | --- | --- | --- |
| LLM01: Prompt Injection | Malicious input (prompts) manipulates the LLM's behavior in unintended ways, causing information leaks or unauthorized actions | Constrain model behavior; validate and filter input and output; control privileges; introduce human approval | Use the "Prompt attacks" content filter in Amazon Bedrock Guardrails | Confirm that the prompt attacks filter is enabled, with the Block action and a HIGH threshold |
| LLM02: Sensitive Information Disclosure | Sensitive information such as personal or confidential data leaks through the model's responses or behavior | Validate and filter output; manage training data; access control | Use the "sensitive information filters" in Amazon Bedrock Guardrails | Confirm that sensitive information filters are enabled for Output |
| LLM06: Excessive Agency | Giving the LLM or its agents excessive autonomy or privileges leads to unintended actions or operations | Enforce least privilege; human approval; privilege audits | Use "Agent" in the Amazon Bedrock Builder tools | When using Agents, confirm that Guardrail details are associated |
| LLM09: Misinformation | The LLM generates output containing misinformation or bias, harming users or society | Train on diverse, reliable data; fact-checking; cite sources | Use the "contextual grounding check" in Amazon Bedrock Guardrails | Confirm that the contextual grounding check is enabled |
| LLM10: Unbounded Consumption | The LLM's resource consumption goes uncontrolled, causing DoS, cost blow-ups, or service outages, driven by unlimited requests or compute | Resource limits; quotas; usage monitoring | Use Amazon Bedrock "Model invocation logging" | Confirm that Model invocation logging is enabled |

I presented this content at Cloud Security Night #2, a co-hosted event held in May, so please also refer to the slides.

Implementing the CSPM controls in Rego

With the CSPM controls defined, we moved on to building the mechanism that checks them. For CSPM operations, our group uses Security Hub on AWS, Sysdig on Google Cloud, and Defender for Cloud on Azure. A single integrated tool would no doubt be nicer, but rather than living in the consoles, we check each tool's CSPM alert status through its API and send Slack notifications as needed, so the lack of consolidation has not been an inconvenience.
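As an aside, that API-to-Slack flow might look roughly like the sketch below. This is purely illustrative, not the group's actual tooling: the CSPM endpoint URL and the response handling are hypothetical placeholders (consult your CSPM vendor's API reference for the real ones); only the Slack incoming-webhook payload shape ({"text": ...}) is standard.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical sketch: CSPM_ALERTS_URL and the "failing" check below are placeholders,
// not a real Sysdig or Security Hub endpoint. Secrets come from the environment.
fun main() {
    val http = HttpClient.newHttpClient()

    // 1. Poll the CSPM tool's API for control results (endpoint is a placeholder)
    val alerts = http.send(
        HttpRequest.newBuilder(URI.create(System.getenv("CSPM_ALERTS_URL")))
            .header("Authorization", "Bearer ${System.getenv("CSPM_API_TOKEN")}")
            .GET()
            .build(),
        HttpResponse.BodyHandlers.ofString()
    ).body()

    // 2. Forward anything failing to Slack via an incoming webhook
    if (alerts.contains("\"status\":\"failing\"")) {   // naive check, for the sketch only
        val payload = """{"text": "CSPM control failing - please check the console"}"""
        http.send(
            HttpRequest.newBuilder(URI.create(System.getenv("SLACK_WEBHOOK_URL")))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build(),
            HttpResponse.BodyHandlers.discarding()
        )
    }
}
```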
For the LLM-application CSPM controls, we decided to develop in Rego, which Sysdig's CSPM feature uses. We chose Rego because it is OSS and because, for writing the kind of decision logic over cloud-infrastructure configuration that CSPM requires, we expected the learning cost to be low.

Below is the LLM01: Prompt Injection control implemented in Rego. What it does: risky == true (at risk) is the default, and if the Bedrock Guardrails configuration ContentPolicy.Filters contains a filter whose Type == PROMPT_ATTACK and InputStrength == HIGH, it treats the prompt-attack filter as enabled with its threshold set to High and sets risky == false, judging there to be no risk.

```rego
default risky := true

risky := false if {
    some filter in input.ContentPolicy.Filters
    lower(filter.Type) == "prompt_attack"
    lower(filter.InputStrength) == "high"
}
```

This Rego then has to be deployed to Sysdig as a custom control. The detailed procedure for creating and deploying custom controls is described in the official Sysdig blog, so refer to it; we also developed ours by following it. Although not covered in this post, we built up plenty of know-how about writing custom controls along the way.

Custom controls are deployed to Sysdig as Terraform. Below is the main.tf of the custom control we ended up with.

````hcl
terraform {
  required_providers {
    sysdig = {
      source  = "sysdiglabs/sysdig"
      version = ">=0.5"
    }
  }
}

variable "sysdig_secure_api_token" {
  description = "Sysdig Secure API Token"
  type        = string
}

provider "sysdig" {
  sysdig_secure_url       = "https://app.us4.sysdig.com"
  sysdig_secure_api_token = var.sysdig_secure_api_token
}

resource "sysdig_secure_posture_control" "configure_prompt_attack_strength_for_amazon_bedrock_guardrails" {
  name          = "Configure Prompt Attack Strength for Amazon Bedrock Guardrails"
  description   = "Ensure that prompt attack strength is set to HIGH for your Amazon Bedrock guardrails. Setting prompt attack strength to HIGH in guardrails helps protect against malicious inputs designed to bypass safety measures and generate harmful content."
  resource_kind = "AWS_BEDROCK_GUARDRAIL"
  severity      = "High"

  rego = <<-EOF
    package sysdig

    import future.keywords.if
    import future.keywords.in

    default risky := true

    risky := false if {
        some filter in input.ContentPolicy.Filters
        lower(filter.Type) == "prompt_attack"
        lower(filter.InputStrength) == "high"
    }
  EOF

  remediation_details = <<-EOF
    ## Remediation Impact

    This control will help you ensure that your Amazon Bedrock guardrails are configured with high prompt attack strength, which is crucial for protecting against malicious inputs designed to bypass safety measures and generate harmful content.

    ## Remediation Steps

    1. Navigate to the [Amazon Bedrock console](https://console.aws.amazon.com/bedrock/home).
    2. Select the guardrail you want to modify.
    3. In the guardrail settings, locate the "Content Policy" section.
    4. Ensure that the "Prompt Attack" filter is set to "High" for the "Input Strength".
    5. Save the changes to the guardrail configuration.
    6. Repeat this process for any other guardrails in your AWS environment.

    ## Remediate Using Command Line

    You can use the AWS CLI to update the guardrail configuration. Run the following command to set the prompt attack strength to HIGH for a specific guardrail:

    ```bash
    aws bedrock update-guardrail --guardrail-id <guardrail-id> --content-policy '{"Filters": [{"Type": "prompt_attack", "InputStrength": "high"}]}'
    ```

    Replace `<guardrail-id>` with the ID of your guardrail. Repeat this command for other guardrails in your AWS environment.

    ## Additional Information

    For more information on configuring Amazon Bedrock guardrails, refer to the [Amazon Bedrock documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html).
    ## Contact Information

    Slack Channel: #security-coe-group
  EOF
}
````

In fact, Sysdig originally did not support Amazon Bedrock as a CSPM target resource. When we raised this with Sysdig, they added support remarkably quickly, which is what made this initiative possible. The functionality itself is satisfying, but that kind of responsiveness is also very reassuring for us as Sysdig users.

From there, following the same approach, we implemented several more of the controls listed under the security guidelines above in Rego and deployed them to Sysdig.

Operating AI-SPM with Sysdig

The deployed custom controls are defined as a custom policy, giving us visibility into our Amazon Bedrock resources. The operating model is simple: if the visibility surfaces a problem, we fix it. In our company, our group sends improvement requests to the product development groups, and when we do, we include the remediation steps along with the request.

The screen below shows the Sysdig console displaying the CSPM control for LLM01: Prompt Injection. The control name is Configure Prompt Attack Strength for Amazon Bedrock Guardrails (we came up with the name ourselves, to sound the part). It shows three Amazon Bedrock Guardrail resources, one Failing and the other two Passing.

Drilling down from that screen shows the remediation impact, remediation steps, and so on; this content also reflects what we wrote in main.tf.

In practice, though, we rarely open the Sysdig console; we check alerts and the like through the Sysdig API.

Summary

In this post, I introduced how we secure LLM applications by implementing configuration checks for Amazon Bedrock in Rego and operating them with Sysdig. Since the guidelines also cover Azure AI Foundry and Google Cloud Vertex AI, we plan to proceed the same way there: develop the Rego and operate it through Sysdig.

In addition to conventional CSPM, AI-SPM has to address areas that conventional CSPM cannot cover, such as AI-specific issues and the protection of data assets, not just the security of the cloud infrastructure as a whole. AI technology is evolving rapidly, with new concepts such as MCP and A2A appearing recently, and it is important to push security measures forward in step with these developments. We will keep strengthening the security of AI applications while keeping up with new technologies and challenges.

Finally

The Security CoE group is looking for people to work with us. Whether you have hands-on cloud-security experience or are simply interested in the field, you are very welcome. Feel free to get in touch; see here for details.
Hello. I'm Momoi ( @momoitter ), a designer in the Creative Office at KINTO Technologies. In this article, I'd like to walk through how I used "Omni-Reference," a feature newly added in Midjourney v7, to give our original character しぇるぱ a unified look.

This article is recommended if you:

- want to learn more about Omni-Reference in Midjourney v7
- want to try image generation that keeps a character consistent
- want to produce high-quality visuals with AI tools

I'll explain the key points for using Omni-Reference and tips for tuning it, with actual prompts and generated images along the way.

What is Omni-Reference?

In May 2025, the long-awaited "Omni-Reference" feature arrived in Midjourney v7. It lets you generate new images while referencing a specific image, so that the visuals stay consistent. This makes it possible to maintain a character's identity, something that had been difficult in v7, and it works not only for people but also for objects and vehicles. A parameter called "Omni Strength (ow)" lets you control numerically how faithfully the output follows the reference image.

About しぇるぱ

What is しぇるぱ? In November 2024, an AI-generated character named しぇるぱ appeared in the opening movie of 超本部会, a KINTO Technologies internal event. Since then, she has been active as a navigator who introduces KINTO Technologies at engineering events, recruiting events, and more. The story of her creation is here:
https://blog.kinto-technologies.com/posts/2025-03-07-creating_a_mascot_with_generative_AI/

Why unify the look?

Since しぇるぱ's debut, image- and video-generation AI has evolved rapidly. As a result, her original visuals started to feel a little dated. Shortly afterwards, Midjourney v7 and Runway Gen-4 were released and generation quality leapt forward, so we decided this was the moment to update しぇるぱ's visuals as well. The update process is covered here:
https://blog.kinto-technologies.com/posts/2025-06-06-ai-character-movie-making/

However, while v7's expressive power is extremely high, at release it had no feature for reproducing the same character's appearance consistently, so the face came out subtly different with every generation. And precisely because the update raised the quality, there was a side effect: the approachability the early しぇるぱ had as a character began to fade. Then, in May 2025, the new Omni-Reference feature was added to Midjourney v7. Using it, I took on rebuilding an "evolved しぇるぱ" that keeps the original look while combining the beauty and realism v7 is capable of.

Let's try it! Consulting ChatGPT

First, I asked ChatGPT: "I want to upload the attached image to Midjourney v7's Omni-Reference and generate a front-facing image with a clean background in Midjourney, while keeping this pink-haired female character's appearance fixed. Please give me a prompt for Midjourney."

It came back with this prompt:

upper body portrait of a female virtual operator in a clean futuristic interior, facing camera directly, symmetrical composition, looking straight ahead, centered, gentle expression, slightly relaxed face, subtle smile, brightly illuminated face, soft front lighting, high key lighting on the face, studio-style lighting setup, clear and vivid facial features, softly lit background, minimalistic sci-fi control room, white and silver tones, crisp details

How to use Omni-Reference

Step 1: Drag & drop the image. Drag the image you want to reference onto Midjourney's prompt input field; "Omni-Reference" appears in the bar at the top of the screen, so drop it there.

Step 2: Adjust Omni Strength (ow). "Omni Strength" adjusts the strength of the consistency (that is, fidelity to the source image). The value is specified with the ow parameter.

Step 3: Generate. Enter the prompt you got from ChatGPT and start generating!

How the ow value changes the result

- Low ow (e.g., 100-200) → doesn't resemble the source image much, but you get Midjourney's characteristically delicate, beautiful rendering
- High ow (e.g., 800-1000) → resembles the source image, but is pulled toward it so strongly that the Midjourney quality is lost

(Sample generations at ow 100, 200, 400, 600, 800, and 1000.)

Searching for the best balance

After much trial and error, I concluded that ow 200-400 was optimal for the しぇるぱ update. In that range, the output keeps traces of the original while still achieving the beautiful rendering Midjourney is known for. At one point, a single image appeared that made me think, "This is it!" - an ideal visual that retains the original しぇるぱ's features together with Midjourney's beauty and delicacy.

Building out scenes and a world

With the chosen image as the base, I used Omni-Reference to expand into new compositions and backgrounds, consulting ChatGPT again for scene ideas and Midjourney prompts. With a solid base in place, developing a world that fits it went smoothly. The updated look of しぇるぱ is also used at the start of our company introduction video:
https://www.youtube.com/watch?v=8Df_0StDAiw

Application: rolling it out to other internal content

I also used しぇるぱ in the slides for a recent internal study session. Thanks to Omni-Reference, we no longer need to fret over whether each image "looks like her," and she can be brought smoothly into slides, event visuals, and more.

Tips learned in practice

Omni-Reference is a very powerful feature, but that also means the reference can be too strong:

- Want freedom in the outfit → attach an image of the face only
- Want to change the hairstyle too → narrow the reference to the bare minimum, e.g., centered on the eyes

By adjusting the scope of the reference this way, you can keep the face consistent while still enjoying Midjourney's expressive freedom.

Conclusion: an era in which anyone can create a しぇるぱ

With the arrival of Omni-Reference, we now have a production environment in which you can keep a character's visuals consistent while developing high-quality scenes. In other words, the era has arrived in which anyone can recreate a presence like しぇるぱ. Keeping her shut away inside just me would be a waste, which is exactly why I hope that, from here on, everyone in the company can raise しぇるぱ with their own creativity. I want to expand how she can be expressed and let しぇるぱ spread her wings further. As AI technology evolves, しぇるぱ will keep evolving too.