KINTO Technologies Tech Blog

"I’d love to create a video with this kind of worldview…" What do you do when you feel that way? Without any hesitation, I decided to leave it all up to ChatGPT. Hello. My name is Momoi ( @momoitter ), and I’m a designer in the Creative Office at KINTO Technologies. This article summarizes my process of using AI tools such as ChatGPT, Midjourney, and Runway to create the visuals of a pink-haired virtual character named "KTC AI WEB," almost entirely through conversation. Even if you don’t have any specialized skills or much time, all you need is an idea of the kind of video you’d like to create. This article is written for people who’d like to experience the process of gradually turning an image into a tangible form together with AI. First, please take a look at the completed video. https://www.youtube.com/watch?v=GH9CdNqTyHQ It All Started with the "Renewal" of a Character This character, KTC AI WEB, was originally created for the company’s internal event, "CHO All-Hands Meeting," in November 2024. ! Please take a look at this article for the creative process. https://blog.kinto-technologies.com/posts/2025-03-07-creating_a_mascot_with_generative_AI/ This character used cutting-edge AI technology at that time, and attracted a lot of attention within the company. …But only four months have passed since then. At that time, I thought image generative AI and video generative AI were amazing and had already progressed so much. Though looking at her now, she feels a little outdated. So, I thought to myself, "I will take this as an opportunity to use the latest technology to upgrade this character’s world," and decided to start rebuilding it together with ChatGPT. Step 1: Share the Worldview and Generate an Image The first thing I did was share the character and worldview. I uploaded the image of KTC AI WEB that I had originally generated to ChatGPT and told it the following: This character was created using slightly outdated image generative AI technology, so I would like to update her appearance. She has a setting called "Virtual Agent." Please expand on the worldview based on that setting, and propose scene variations and prompts that can be expressed in Midjourney v7. The reason I chose Midjourney was because I felt that the accuracy and texture of the character depictions had significantly improved since the update to v7. I thought it would be perfect for a situation like this one, where I wanted to upgrade the look of an existing character. Right after that, I received a series of specific situation ideas and corresponding prompts such as, "With that worldview, how about a scene like this?" It felt like I was brainstorming with a film director. When I typed the prompts into Midjourney, the visuals that were generated one after another went far beyond my imagination, and I was amazed at how expressive they were. When I first started making this video, Midjourney v7 did not have features like "Omni-Reference" to maintain character consistency yet. So, I made an effort to make the prompts look consistent by consciously including the easily recognizable characteristic of "short pink hair" in them. If something different from what you imagined comes up, just tell ChatGPT things like "Get a littler closer to her face," or "Make the background brighter and cleaner," and it will instantly output a readjusted prompt. Step 2: Generate a Video from an Image Once you have generated an image that you like, you can attach it to ChatGPT and make a request as seen below. 
This is an image that was generated in Midjourney based on the prompt for the proposed scene. I’d like to set this image as the first frame of Runway’s Gen-4 keyframe feature and generate a video. Please generate a prompt that adds some movement to make this scene more appealing. ChatGPT reads the content of the image and creates a Runway prompt that maximizes its appeal. The reason I used Runway was because, with the advancement to Gen-4, I felt that it could animate the image without compromising Midjourney’s high-definition appeal. I uploaded the image generated by Midjourney to Runway Gen-4’s image to video. By pasting the prompt output by ChatGPT, a high-quality video was generated that brought out the image’s worldview to the fullest. If the image of the character or camera movement is different from what you imagined, simply tell ChatGPT, "The generated video was like this, so I’d like to change this part like this," and it will re-suggest prompts. Step 3: Select Background Music with ChatGPT ChatGPT is also extremely useful when searching for background music for videos. What keywords should I use to search for background music in Adobe Stock that fits this worldview? When asked, it suggested several words that fit the atmosphere, such as "futuristic," "sci-fi," and "cyberpunk." Step 4: Edit and Finish Stitch the generated video and background music together in Premiere Pro, and adjust the structure, length, and tempo. Adding fade-ins and fade-outs when switching scenes and varying the speed of the sounds can greatly improve the overall quality of your video. By combining the still images created in Midjourney with the smooth movements created in Runway, I was able to add a sense of "breath" and "atmosphere" that couldn’t be fully conveyed with still images alone, creating an image video that makes KTC AI WEB’s worldview feel even more real. https://www.youtube.com/watch?v=GH9CdNqTyHQ Giving Shape to Imagination with AI What I felt most during this process was that ChatGPT helped me to gradually "verbalize and materialize" the vague images in my head. Whether it was Midjourney or Runway, I felt that just by saying, "It’s a little different," or "It’s more like this," I was able to get closer to my ideal expression. By working together with AI, we’ll be able to greatly expand our creative horizons. Please give it a try.
Hello Hello, I'm hayashi-d1125, I joined the company in February 2025! In this article, I asked our new joiners from February 2025 to share their initial impressions after joining. I've compiled their thoughts here. I hope this content proves helpful for anyone interested in KINTO Technologies and offers a moment of reflection for the members who took part in the interview! Yasuharu Satake Self-introduction I'm Satake from the Project Promotion Group of the New Service Initiatives Division. I work as both a product manager (PdM) and project manager (PjM), handling new products and projects planned internally within the company. How is your team structured? The Project Promotion Group has a total of 15 members, six of whom make up the product management team that I’m part of. What was your first impression when you joined KTC? Were there any surprises? As an in-house development company within the Toyota Group, I was surprised to find how highly organized and well-developed the internal structure was. What is the atmosphere like on-site? Our team has a strong mutual support system—whenever someone has a question, it’s easy to get information or advice from other members. When we're in the office, we often go out for lunch together, and even outside of work, there are regular social events across divisions, creating a lively and friendly environment. How did you feel about writing a blog post? I used to read this Tech Blog before joining the company, but I didn't expect to be contributing so soon after joining. I'm very honored, and I enjoy sharing ideas and information, so I'd love to keep writing when the opportunity arises in the future. Question from Hiraku Kudo to Yasuharu Satake How do you interact with members of other divisions? I actively participate in cross-division events like Bear Bash and club activities within KINTO Technologies, which help me build connections across the company. In particular, at Bear Bash, I performed as a DJ for the event's background music, which gave me the chance to interact with many colleagues! Yurie Wakisaka Self-introduction I work in the Corporate Planning Group of the Development Support Division. I mainly handle financial back-office tasks such as billing and budgeting at KINTO Technologies. How is your team structured? Our team is made up of six members, and we share the workload by dividing tasks among ourselves. What was your first impression when you joined KTC? Were there any surprises? I was surprised at how quickly decisions are made and turned into action! What is the atmosphere like on-site? I find the on-site atmosphere to be very collaborative. Since our team is distributed across various locations, most of our communication happens remotely. However, we hold regular meetings to maintain clear communication and keep projects on track. How did you feel about writing a blog post? I'm not very good at writing, but I saw this as a great opportunity and decided to take it on with a positive attitude. Question from Yasuharu Satake to Yurie Wakisaka What differences have you noticed between KINTO Technologies and your previous workplaces? I feel the company invests generously in learning, such as study groups and seminars. Xiaolong Yang Self-introduction I'm Yang from the Salesforce Development Group in the Business System Development Division. I work on maintenance and development for KINTO FACTORY. How is your team structured? Our group consists of six members, including myself. What was your first impression when you joined KTC? 
Were there any surprises? I felt "freedom." From dress code to flexible working hours. What is the atmosphere like on-site? Everyone on the team is kind and approachable, making it easy to ask questions whenever I'm unsure about something. How did you feel about writing a blog post? I'm not so confident when it comes to writing about personal thoughts, so this was challenging for me. Question from Yurie Wakisaka to Xiaolong Yang What has been the biggest challenge you've faced since joining the company? Sometimes in meetings or chats, I encounter words or terms that I don’t understand. I'm still working hard on improving my Japanese! Yohei Hayashida Self-introduction I am Hayashida from the Platform Engineering Team within the Platform Group. I'm involved in developing, providing, and maintaining various tools for our development teams at KINTO Technologies. I'm based at the Osaka Tech Lab. How is your team structured? We have three members at the Osaka Tech Lab, and six at the Jinbocho Office. Since we work across different locations, we rely on communication tools like Slack and Teams. What was your first impression when you joined KTC? Were there any surprises? Given how well-developed the systems and workflows were, it was hard to believe the company had only been around for four years. On the other hand, there are still many areas within my own team that are yet to be developed, and I'm excited about the opportunities to take part in building it. What is the atmosphere like on-site? Osaka Tech Lab, where I work, started with just one person, and by the time I joined, the team had finally grown to three members. Since we're not based at the main Jinbocho Office, I sometimes feel a bit out of the loop with what's trending there. I think there's still room to improve communication across different locations. How did you feel about writing a blog post? I used to write blog posts at my previous job, so I didn't feel particularly uncomfortable or hesitant about it. But finding good topics is always a challenge regardless of the company (laughs), so I try to regularly explore new technologies to keep fresh ideas coming. Question from Xiaolong Yang to Yohei Hayashida How do you spend your days off? I have a family of five, my wife and children, so I spend most of my time off with them. Last week, we all drove to Chubu Centrair International Airport in Aichi Prefecture for a family outing. Sakura Kodama Self-introduction I'm involved in data analysis in the Analysis Production Group in the Data Analytics Division. How is your team structured? Our team consists of my boss, four senior colleagues, and myself. What was your first impression when you joined KTC? Were there any surprises? I was surprised at how thorough the orientation was. Aside from that, just as I had heard beforehand —appropriately flexible, so nothing came as a major surprise. What is the atmosphere like on-site? Everyone is calm and kind, but highly professional. An unexpected contrast that really struck me! (I had this stereotype that professionals are scary.) How did you feel about writing a blog post? When I first heard about it, I honestly thought, "This sounds like a pain." But once I started writing, it turned out to be a great way to reflect on where I am now, and I'm glad I gave it a try. It reminded me how important it is to take on new challenges. Question from Yohei Hayashida to Sakura Kodama What made you decide to work in data analysis? 
While working in the outsourcing industry, I was unexpectedly assigned to an access analytics team to fill a sudden vacancy at a client's office. I had no experience and didn't even know this kind of work existed, but once I got into it, I found myself wanting to dig deeper—and here I am now. Shuya Ogawa Self-introduction I'm Ogawa from the Salesforce Development Group in the Business System Development Division. I'm responsible for maintaining the Factory BO system. How is your team structured? Our team consists of one manager and five members. We handle Salesforce operations and maintenance, data integration, and projects related to Salesforce. What was your first impression when you joined KTC? Were there any surprises? I thought that working in a team of engineers would mean it might be hard to ask questions without a certain level of technical knowledge. But the atmosphere was very open, and I found it easy to ask for help. They responded quickly and were genuinely supportive. What is the atmosphere like on-site? As I mentioned above, the atmosphere is really open and approachable. How did you feel about writing a blog post? I have never written a blog post before, so I'm grateful for the opportunity. Question from Sakura Kodama to Shuya Ogawa How do you refresh during work breaks? When I work from home, I go for a 30-minute run during my lunch break. When you're running, you don't have time to think, so you're forced to clear your mind. When I work in the office, I'm still figuring out the best way. Saki Yasuda Self-introduction My name is Yassan, and I'm working in the Cloud Infrastructure Group in the Platform Development Division. As the department name suggests, I work on the cloud infrastructure that supports our service platforms. How is your team structured? The Cloud Infrastructure Group has nine members, but is further divided into smaller teams. What was your first impression when you joined KTC? Were there any surprises? I came in expecting a rigid environment with lots of strict rules, but the reality was quite the opposite. The atmosphere was casual, with open communication across all levels. Even in chat, people casually use stickers, which was a surprise for me. What is the atmosphere like on-site? At my previous job, it was hard to bring up work-related questions because of the "quiet" atmosphere. Now I can discuss things with people right away, and we get along well as a team, so we always go out to eat lunch together. ♪ How did you feel about writing a blog post? I used to read this blog before joining the company, so it feels really special to be writing for it now! Question from Shuya Ogawa to Saki Yasuda Do you have any favorite lunch spots around the Jinbocho Office? I highly recommend a restaurant I recently visited called Mori no Butchers. The lunch menu included hearty beef and pork steaks—they were absolutely delicious! I went around 11:30 and still had to wait 30 minutes, but it was totally worth it! Hiraku Kudo Self-introduction I'm Kudo and I've joined the Engagement Group in the Mobility Product Development Division. My role is to support the digital transformation (DX) of operations within dealerships. How is your team structured? Our team is made up of three members, including the manager. We collaborate closely with other development teams in the division and the KINTO Sales department, working directly with dealerships to understand their needs for digital transformation. What was your first impression when you joined KTC?
Were there any surprises? I got the impression that there were a lot of engineers around. Since I hadn't worked so closely with engineers before, seeing everyone's monitors filled with code was a fresh experience for me. What is the atmosphere like on-site? I’m frequently out visiting dealerships, but I’m always inspired by how everyone prioritizes the dealers’ needs when crafting proposals. How did you feel about writing a blog post? I have never been involved in a company blog before, so knowing this will be published makes me a bit nervous. Question from Saki Yasuda to Hiraku Kudo How do you think generative AI could be used to boost engagement? We already have products that use generative AI to suggest alternative vehicle options to customers. I see great potential in using it for internal tasks like streamlining inquiry handling. There are many ways generative AI can enhance operational DX at dealerships. Conclusion Thank you, everyone, for sharing your thoughts on the company after joining! There are more and more new members at KINTO Technologies every day! We'll be posting more new-joiner stories from across divisions, so stay tuned! And yes — we're still hiring! KINTO Technologies is looking for new teammates to join us across a variety of divisions and roles. For more details, check it out here!
Introduction

Hello! My name is Kameyama and I work as a web engineer at KINTO Technologies. I currently work in the Marketing Product Group. In this article, I will talk about how I built a serverless architecture. With container-based applications like those running on ECS, you're charged for CPU and memory usage based on uptime, even when there are no incoming requests. This means you can end up paying for resources you’re not actually using, especially in PoC development or in products with very low traffic. For these types of use cases, it is possible to significantly reduce running costs by using a pay-as-you-go serverless architecture, in which the server runs only when in use and automatically stops if no processing is performed for a certain period of time. To achieve this, we built a Lambda-based application with the following key points:

Serverless development using AWS API Gateway + Lambda
Simple and versatile API design with TypeScript + Express

About Serverless

We decided to adopt Lambda, which is widely used as part of AWS's serverless offerings. As mentioned earlier, Lambda automatically handles server startup, shutdown, and scaling, and its pay-as-you-go pricing means you are charged only for what you use, minimizing costs. On the other hand, a disadvantage of such serverless APIs is the response delay caused by cold starts. In environments with few requests, or when there has been no access for a certain period of time, Lambda goes idle, and when the next request arrives it takes time for the container to start up (the measured response time was about 1 second). In summary, this infrastructure configuration is especially recommended for those who want to quickly build a prototype or develop a tool for users who can tolerate response delays (such as internal members).

How Much Cheaper with Lambda?

Let's compare the costs of Fargate, an always-running container, and Lambda, the serverless option we will use this time.

Fargate

AWS Fargate costs: Assuming 0.5 vCPU and 2GB of memory, the estimated operating cost per task per hour is as follows:

vCPU cost: 0.5 vCPU x $0.04048 per vCPU-hour = $0.02024/hour
Memory cost: 2GB x $0.004445 per GB-hour = $0.00889/hour

Based on these calculations, the total cost per hour is $0.02024 + $0.00889 = $0.02913. If the task runs continuously for a full month (720 hours), the monthly cost per task would be $20.9736. (However, you can reduce the cost by shutting down at night or lowering the vCPU specs.) This is the cost per environment, so if you need multiple environments, such as production and development, the total cost will scale accordingly.

Lambda

AWS Lambda cost: Lambda costs, on the other hand, are calculated based on the number of requests and the compute time of the container temporarily activated in response to those requests.

$0.00001667 per GB-second
$0.20 per 1,000,000 requests

Assuming 2GB of memory like Fargate, a compute time of 0.5 seconds per request, and 100,000 requests per month, the total monthly cost for Lambda is $0.02 (request cost) + $1.6667 (compute cost) = approximately $1.69 per month. Even better, even if you increase the number of environments or the number of Lambdas per environment, the total cost remains the same as long as the total number of requests is unchanged. These cost simulations demonstrate the cost advantages of Lambda.
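To make it easy to re-run this comparison with your own numbers, here is a minimal TypeScript sketch of the same arithmetic. It only uses the unit prices quoted above (which vary by region) and ignores the Lambda free tier.

```typescript
// Rough monthly cost comparison using the unit prices quoted above.
const HOURS_PER_MONTH = 720;

// Fargate: a 0.5 vCPU / 2 GB task running continuously all month
const fargateMonthly = (0.5 * 0.04048 + 2 * 0.004445) * HOURS_PER_MONTH; // ≈ $20.97

// Lambda: 2 GB of memory, 0.5 s of compute per request, 100,000 requests per month
const requestsPerMonth = 100_000;
const gbSeconds = 2 * 0.5 * requestsPerMonth; // 100,000 GB-seconds
const lambdaMonthly = (requestsPerMonth / 1_000_000) * 0.2 + gbSeconds * 0.00001667; // ≈ $1.69

console.log(`Fargate: $${fargateMonthly.toFixed(2)} per month`);
console.log(`Lambda:  $${lambdaMonthly.toFixed(2)} per month`);
```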
This kind of cost reduction is especially beneficial for low-traffic internal tools that don't generate revenue, or for PoC products, as it helps lower financial barriers. About Express We adopted Express as the server-side JavaScript framework. Express is designed to allow the intuitive understanding of the concepts of routing and middleware. Its configuration is easy to handle even for developers doing server-side development with Node.js for the first time. Express allows smooth scaling from small APIs to medium and large applications. The routing description is also concise. app.get('/users/:id', (req, res) => { res.send(`User: ${req.params.id}`); }); You can easily incorporate a wide range of middleware libraries depending on your needs, such as morgan for log output, passport for authentication, express-validator for input validation, etc. This makes it easier to add features to and maintain your application. It is possible to build an endpoint using the Lambda library officially distributed by AWS, but if you build it using the general-purpose library Express, it will be easier to reuse the code after the endpoint when switching to ECS or App Runner as the scale of your application expands, rather than using a Lambda-specific library. Development Policy In this article, I adopted a configuration in which multiple API endpoints are consolidated into a single Lambda function . This is to make the most of Lambda's "hot start" feature. Once a Lambda function is started, it remains in memory for a certain period of time, which is called a "hot start" state. Therefore, after one API is requested and Lambda is launched, requests to other APIs within the same function can also be processed speedily. By taking advantage of this property, you can expect improved performance during operation. However, Lambda has a limit on the deployable package size (50MB or less when zipped and 250MB or less after unzipped), so if you pack all the APIs in your application into a single function, you will eventually reach this limit, making it unrealistic. For this reason, I will assume a structure in which related APIs are grouped into the same Lambda function by screen or functional unit . Ultimately, I will proceed on the assumption of a monorepo structure in which multiple Lambda functions are managed within a single repository. In this article, the goal is to enable local execution using SAM, and I will omit the configuration of the AWS console or what happens after deployment. Environment Building (Preparation Before Coding) In this article, I will explain how to build an environment that combines pnpm, which makes it easy to manage multiple Lambda functions and shared code, with AWS SAM. The entire project is managed as a pnpm workspace, and each Lambda function and common library is treated as an independent workspace. The deployment tool used is AWS SAM (Serverless Application Model). Mainly, the following tools are required. Node.js pnpm AWS CLI AWS SAM CLI Git (version control) Git installation is omitted. Installing Required Tools Node.js Node.js is required as before. You can install the LTS version from the official website. Node.js official website After installation, check that the version is displayed with the following command. node -v npm -v # pnpmをインストールするために使用する pnpm Use pnpm to manage dependent libraries. 
pnpm is particularly good at resolving dependencies and the efficient use of disk space in a monorepo configuration where multiple modules (Lambda functions) are managed in a single repository. Install pnpm using the following method: npm install -g pnpm For methods using curl or others, please refer to the official pnpm website. pnpm installation guide After installation, check the version with the following command: pnpm -v AWS CLI As before, the AWS CLI is required for linkage with AWS. Install it and set up your credentials using aws configure. AWS CLI Installation Guide AWS SAM CLI This time I will use AWS SAM (Serverless Application Model) as the deployment tool. AWS SAM is an infrastructure as code (IaC) framework for serverless applications, and the SAM CLI supports local build, testing, and deployment. Refer to the official website below and install AWS SAM CLI according to your operating system. AWS SAM CLI Installation Guide After installation, check the version with the following command: sam --version Project Structure and Workspace Setup In the root directory of the project, place package.json , which defines the config files for the entire monorepo and the dependencies of tools commonly used during development (e.g., esbuild). Each Lambda function and common library is created as an independent subdirectory, for example, inside the functions directory, and these are defined as pnpm workspaces. Using the provided structure as a reference, I will explain the basic structure and configuration files. sample-app/ # (ルートディレクトリ) ├── functions/ │ ├── common/ # 共通コード用ワークスペース │ │ ├── package.json │ │ ├── src/ │ │ └── tsconfig.json │ ├── function-1/ # Lambda関数1用ワークスペース │ │ ├── package.json │ │ ├── src/ # Expressアプリやハンドラコード │ │ └── tsconfig.json │ └── function-2/ # Lambda関数2用ワークスペース │ ├── package.json │ ├── src/ │ └── tsconfig.json ├── node_modules/ # pnpmによって管理される依存ライブラリ ├── package.json # ルートのpackage.json ├── pnpm-lock.yaml # ルートのロックファイル ├── pnpm-workspace.yaml # ワークスペース定義ファイル ├── samconfig.toml # SAM デプロイ設定ファイル (初回デプロイで生成) └── template.yaml # AWS SAM テンプレートファイル Root package.json This defines scripts and development tools (such as esbuild) shared across the entire repository. package.json { "name": "sample-lambda-app-root", // プロジェクト全体を表す名前 "version": "1.0.0", "description": "Serverless Express Monorepo with SAM and pnpm", "main": "index.js", "private": true, // ルートパッケージは公開しない設定 "workspaces": [ "functions/*" // ワークスペースとなるディレクトリを指定 ], "scripts": { "build": "pnpm -r build", // 全ワークスペースの build スクリプトを実行 "sam:build": "sam build", // SAMでのビルド (後述) "sam:local": "sam local start-api", // SAMでのローカル実行 (後述) "sam:deploy": "sam deploy" // SAMでのデプロイ (後述) }, "devDependencies": { "esbuild": "^0.25.3" // 各ワークスペースのビルドで使う esbuild をルートで管理 // 他、monorepo全体で使う開発ツールがあればここに追加 }, "keywords": [], "author": "", "license": "ISC" } pnpm-workspace.yaml This defines which directories should be handled as workspaces. pnpm-workspace.yaml packages: - 'functions/*' # `functions` ディレクトリ内の全てのサブディレクトリをワークスペースとする # - 'packages/*' # 別のワークスペースグループがあれば追加 Dependency Management (pnpm workspaces) Describe the dependent libraries required for each Lambda function or common library in the package.json inside each workspace. 
Example: functions/function-1/package.json { "name": "function-1", // ワークスペースの名前 "version": "1.0.0", "description": "Lambda Function 1 with Express", "scripts": { "build": "esbuild src/app.ts --bundle --minify --sourcemap --platform=node --outfile=dist/app.js", // esbuildでビルド "start:dev": "nodemon --watch src -e ts --exec \"node dist/app.js\"", // ローカルテスト用のスクリプト (SAM Localとは別に用意しても良い) "tsc": "tsc" // 型チェック用 }, "dependencies": { "@codegenie/serverless-express": "^4.16.0", // Lambdaアダプター "express": "^5.1.0", "@sample-lambda-app/common": "workspace:*" // 共通ライブラリへの依存 }, "devDependencies": { "@types/aws-lambda": "^8.10.138", // Lambdaの型定義 "@types/express": "^4.17.21", "nodemon": "^3.1.0", "typescript": "^5.4.5" // esbuild はルートの devDependencies にあるのでここでは不要 }, "keywords": [], "author": "", "license": "ISC" } @sample-lambda-app/common : This refers to the functions/common workspace. By designating "workspace:*" , the local common workspace will be referred to. It needs to be defined as "name": "@sample-lambda-app/common" in package.json on the common workspace side. scripts.build : This is an example of using esbuild to bundle TypeScript code and dependent libraries together into a single JavaScript file (dist/app.js). This is an important step to reduce the package size deployed to Lambda. To install dependent libraries, run pnpm install only once in the root directory of the project. pnpm looks at pnpm-workspace.yaml and resolves the dependencies described in package.json for each workspace, efficiently configuring node_modules . pnpm install To add a library to a specific workspace, run the following command from the root directory: pnpm add <package-name> -w <workspace-name> # 例: pnpm add axios -w functions/function-1 pnpm add -D <dev-package-name> -w <workspace-name> # 開発依存の場合 Let's Actually Write Some Sample Code The directory configuration explained earlier includes two function modules, function-1 and function-2 , to create a multi-function configuration, as well as a module called common so that these functions can use it as a shared component. Now let’s write some actual code. Common Code First, let's write a sample middleware function in common, which is a common component. functions/common/src/middlewares/hello.ts import { Request, Response, NextFunction } from 'express'; /** * サンプル共通ミドルウェア * リクエストログを出力し、カスタムヘッダーを追加します。 */ export const helloMiddleware = (req: Request, res: Response, next: NextFunction) => { console.log(`[Common Middleware] Received request: ${req.method} ${req.path}`); // レスポンスにカスタムヘッダーを追加 res.setHeader('X-Sample-Common-Middleware', 'Applied'); // 次のミドルウェアまたはルートハンドラに進む next(); }; 続いて、middlewares/内のエクスポートを追加します。 functions/common/src/middlewares/index.ts export * from './hello'; // middlewares内に他のミドルウェアがあればここに追加していく さらにワークスペースのトップレベルのsrc/でもエクスポートしてあげる必要があります。 functions/common/src/index.ts export * from './middlewares'; // middlewaresのような共通処理が他にあればここに追加していく(utilsとか) Code for function-1 Next, I will write the code for function-1. 
functions/function-1/src/app.ts import express from 'express'; import serverlessExpress from '@codegenie/serverless-express'; import { helloMiddleware, errorHandler } from '@sample-lambda-app/common'; // 共通ミドルウェア、エラーハンドラをインポート // apiRouter のインポートは不要になりました // import apiRouter from './routes/api'; // import cookieParser from 'cookie-parser'; // 必要に応じてインストール・インポート const app = express(); // express標準ミドルウェアの適用 app.use(express.json()); // JSONボディのパースを有効化 // app.use(cookieParser()); // クッキーパースが必要な場合このように追加する // 共通ミドルウェアの適用 app.use(helloMiddleware); app.get('/hello', (req, res) => { console.log('[Function 1 App] Handling GET /hello'); res.json({ message: 'Hello from Function 1 /hello (Simplified)!' }); }); app.post('/users', (req, res) => { console.log('[Function 1 App] Handling POST /users'); console.log('Request Body:', req.body); // JSONボディをログ出力 res.status(201).json({ received: req.body, status: 'User created (sample)' }); }); // common等にエラーハンドラミドルウェアを作成し、使用する場合は全てのミドルウェアとルート定義の後に配置する。 // app.use(errorHandler); // 本記事では作成していない // ハンドラのエクスポート export const handler = serverlessExpress({ app }); Note: In the API Gateway configuration in template.yaml that will be done later, the path without /function1 will be passed, so the route defined here will be a relative path from the API Gateway base path. For example, if a request to API Gateway is /function1/hello, it will match the /hello defined here. Code for Function-2 functions/function-2/src/app.ts import express from 'express'; import serverlessExpress from '@codegenie/serverless-express'; // ★アダプターをインポート★ import { helloMiddleware, errorHandler } from '@sample-lambda-app/common'; // 共通ミドルウェア、エラーハンドラをインポート // ルーターファイルは使用しないためインポート不要 // import apiRouter from './routes/api'; // import cookieParser from 'cookie-parser'; // 必要に応じてインストール・インポート const app = express(); // express標準ミドルウェアの適用 app.use(express.json()); // JSONボディのパースを有効化 // app.use(cookieParser()); // クッキーパースが必要な場合このように追加する // 共通ミドルウェアの適用 app.use(helloMiddleware); // ルートをごとに処理を定義 app.get('/bye', (req, res) => { console.log('[Function 2 App] Handling GET /bye'); res.json({ message: 'Goodbye from Function 2 /bye!' }); }); app.post('/items', (req, res) => { console.log('[Function 2 App] Handling POST /items'); console.log('Request Body:', req.body); // JSONボディをログ出力 res.status(201).json({ received: req.body, status: 'Item created (sample)' }); }); app.get('/status', (req, res) => { console.log('[Function 2 App] Handling GET /status'); res.json({ status: 'OK', function: 'Function 2 is running (Simplified)' }); }); // common等にエラーハンドラミドルウェアを作成し、使用する場合は全てのミドルウェアとルート定義の後に配置する。 // app.use(errorHandler); // 本記事では作成していない // ハンドラのエクスポート export const handler = serverlessExpress({ app }); Since this is just a sample, all the processing within the route is written using arrow functions, but in actual development, if the processing becomes complicated it may be better to consolidate the processing into a separate ts file. Also, during development, there may be times when you want to use different middleware for each route. In such a case, you can create an API router more flexibly by using the express Router library, so please look into it and give it a try. (Reference: https://expressjs.com/en/guide/routing.html https://expressjs.com/ja/guide/routing.html ) Preparing to Locally Run SAM AWS SAM template (template.yaml) Create a template.yaml file in the project route to define the AWS resources to be deployed. Describe Lambda functions, API Gateway, necessary IAM roles, and others. 
template.yaml AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Description: Sample Serverless Application Globals: # Functions 全体に適用する共通設定 (メモリサイズやタイムアウトなど) Function: Timeout: 30 MemorySize: 256 # 適宜調整する Runtime: nodejs20.x Architectures: - x86_64 Environment: Variables: NODE_ENV: production Resources: # function-1 ワークスペースに対応するLambda関数リソース定義 Function1: Type: AWS::Serverless::Function # AWS SAMで定義するサーバーレス関数 Properties: FunctionName: sample-express-function-1 # AWSコンソールに表示されるLambda関数名 (任意) Description: Express App for Function 1 (Simplified) # CodeUri は SAM がコードをパッケージングする際のソースディレクトリを指す。 # ここには、sam build 前のソースコードがあるディレクトリを指定。 CodeUri: functions/function-1/ # Handler は、sam build によって生成された成果物の中でのエントリーポイントを指す。 # esbuild が src/app.ts を dist/handler.js にバンドルし、 # その中で 'export const handler = ...' を CommonJS の 'exports.handler = ...' に変換するため、 # 'ファイル名(拡張子なし).エクスポート名' と記述する。 Handler: handler.handler Events: # API Gateway からのトリガー設定 Function1Api: Type: Api # API Gateway REST APIをトリガーとする Properties: Path: /function1/{proxy+} # 許可するHTTPメソッド (ANYは全てのメソッドを許可) Method: ANY # function-2 ワークスペースに対応するLambda関数リソース定義 Function2: Type: AWS::Serverless::Function Properties: FunctionName: sample-express-function-2 # AWSコンソールに表示されるLambda関数名 (任意) Description: Express App for Function 2 (Simplified) # CodeUri は function-2 ワークスペースのソースディレクトリを指す CodeUri: functions/function-2/ # Handler は function-2 のビルド成果物の中でのエントリーポイントを指す Handler: handler.handler Events: # API Gateway からのトリガー設定 (function-2用) Function2Api: Type: Api Properties: # Function 2 が処理するAPI Gatewayパス Path: /function2/{proxy+} Method: ANY Transform: AWS::Serverless-2016-10-31 : This indicates a SAM template. Resources : This defines the AWS resources to be deployed. Type:AWS::Serverless::Function : This is a Lambda function resource. CodeUri : This specifies the directory where the code to be deployed as a Lambda function is located. This specifies the location of the build artifact for each workspace, such as functions/function-1/dist . Handler : This specifies the function name in the code that is called first when the Lambda function is executed. This becomes the function name exported in the bundled file ( dist/app.js ). Events : This sets the events that trigger the Lambda function. Type: Api is a setting that triggers an HTTP request from API Gateway. This setting links to a specific endpoint using Path and Method . /{proxy+} is a notation that catches all requests under the path. Local Development and Testing (AWS SAM CLI) The AWS SAM CLI allows you to emulate and test Lambda functions and API Gateway in your local environment. Build of each workspace : First, build the source code for each workspace into JavaScript. You can use the scripts defined in the root directory. pnpm run build # functions/* 以下のそれぞれの build スクリプトが実行される This generates build artifacts such as functions/function-1/dist/app.js . SAM build : Next, AWS SAM runs a build to create a package for deployment. sam build This command reads template.yaml , copies the build artifacts from the location specified by CodeUri: to a location under the .aws-sam/build directory, and organizes them into the format required by Lambda. Local API startup : The Local API feature provided by SAM CLI allows you to emulate API Gateway and run Lambda code locally. sam local start-api After the command is executed, a local server will start at a URL such as http://127.0.0.1:3000 . 
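Once the local server is running, you can exercise the routes like any other HTTP API. As a quick smoke test, here is a minimal sketch of a Node script (it assumes Node.js 18+ so that the global fetch API is available, and uses the paths from the template.yaml and Express apps above; run it with ts-node or after compiling with tsc):

```typescript
// smoke-test.ts: run while `sam local start-api` is serving on the default port 3000
const BASE_URL = 'http://127.0.0.1:3000';

async function main() {
  // Per the note above, /function1/{proxy+} forwards to the /hello route defined in function-1
  const hello = await fetch(`${BASE_URL}/function1/hello`);
  console.log('GET /function1/hello ->', hello.status, await hello.json());

  // Likewise, /function2/{proxy+} forwards to the /items route defined in function-2
  const items = await fetch(`${BASE_URL}/function2/items`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'sample-item' }),
  });
  console.log('POST /function2/items ->', items.status, await items.json());
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```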
By accessing the path defined in template.yaml (e.g., /function1/hello) via a browser, Postman, or curl, the Lambda function will be executed locally. After changing the source code during local development, you can either re-run pnpm run build → sam build → sam local start-api or use the sam local start-api --watch option to monitor code changes. (The --watch option automatically restarts the build and emulation, but depending on the actual environment configuration, some adjustments may be required.)

Conclusion

This time, I presented how to run a serverless TypeScript application locally using Lambda and Express. To actually release a product, you will also need to build out the AWS infrastructure and configure it appropriately. Since this was my first attempt with Express and a monorepo configuration, I ran into some difficulties. I have provided detailed explanations as a reminder to myself, so this article may have ended up being a bit long. I hope it will be of some help to others facing similar challenges.
Hello, I'm Hoka winter. At KINTO Technologies (KTC), we have spent about a year running the 10X Innovation Culture Program, a leadership program for building an organizational environment that fosters innovation, published by Google Cloud Japan G.K. in September 2023. This time, separately from the 10X sessions we usually run, I'd like to talk about taking part in a 10x Innovation Culture Pitch practice session.

What Is the 10x Innovation Culture Pitch Practice Session?

The purpose of this training is to develop the facilitation skills needed to run the "10X Innovation Culture Program" in-house. That requires a deep understanding of the program, and this training is designed to deepen that understanding. This was the second time KTC has taken the training. Last time, the participants were mainly managers, and the facilitation of our 10X sessions improved dramatically afterwards, so this time we invited volunteer members, mainly team leaders, to take part. The 10x Innovation Culture Pitch practice session has two main parts: one provides input on the six elements for generating innovation, and the other is training in expressing them in your own words. ![](/assets/blog/authors/hoka/20250714/image6.png =600x)

Preparing for the Training

One thing I have learned while being taught 10X by the Google team is that the difficulty on the KTC side goes up a little each time. In the first round of training, everyone at KTC simply participated, but in this second round, KTC employees took on the role of presenters for the culture session. In other words, we had the important job of delivering the input on the six elements for generating innovation to the participants. ![](/assets/blog/authors/hoka/20250714/image3.png =600x) Thankfully, the presentation slides were prepared by the Google team, so all we at KTC had to do was read out the six elements. Even so, it was really difficult!!! The six elements are packed with Google's thinking and case studies on what it takes to be an organization that creates innovation, but simply reading them aloud doesn't reach the participants' hearts. We practiced over and over until we could speak in our own words, weaving in KTC episodes and our own experiences. In particular, I recalled the Google presenters from the first round of training and tried to speak confidently and at an easy-to-follow pace.

Training Day

Then came the big day. Twenty-seven people gathered at the Google office in Shibuya, once again joining from Osaka, Nagoya, and Tokyo. We started with an opening talk by kota from Google; thank you, as always. Next, Kisshi, the department general manager who leads our 10X efforts the most, sent an encouraging message online from the Nagoya office. With the participants wondering, "Wait, what's about to start? What is this training?", we presenters delivered our talks one theme at a time. Would we manage to get 10X across to the participants? Awacchi presented with an original story, I was far too nervous, Yukiki presented online, Nabeyan was as calm as a teacher, Mizuki gave her best performance when it counted, and Otake was relaxed enough to get laughs. Everyone did their best work yet (if I do say so myself). In the participant survey, ten people selected "the culture session was good." I was also delighted to be told directly, "It was wonderful, every bit as good as the Google presenters last time," and "Just watching the slides and listening to the presentation, everything sank right in."

Next came the output portion. In groups of six participants plus one Googler, we moved to separate rooms, and each person gave a presentation just as the presenters had done earlier: 20 minutes x 6 people, for 120 minutes of focused output time. Participants presented for 10 minutes each using the same slides the presenters had used, with 5 minutes of reading time beforehand. While each person presented, the other members filled in a feedback sheet with strengths and areas for improvement, and shared their feedback afterwards. ![](/assets/blog/authors/hoka/20250714/image1.png =600x)

I was in Team D, and everyone was so good that I caught myself wondering, "Did they all practice at home?" During the feedback time, we naturally discussed what was good about each presentation, and the discussion got lively. For example, comments like the following came up:

Speak in summaries, without being tied to the slides or a script
Speak in your own words
Create empathy with stories of failure
Stay close to the listener and don't push "correct" arguments too hard
Good at coining catchy phrases like "motivation switch," which made things easy to understand

![](/assets/blog/authors/hoka/20250714/image7.png =600x) In the post-event survey, satisfaction with the program averaged 4.7 points, which is very high. Participants also selected the following as "good points about the program content" (n=22, multiple answers allowed):

Being able to hear other participants' presentations: 20 people
Having the chance to practice myself: 17 people
Receiving feedback from others: 21 people

Closing

After the presentations, we gathered again in the first seminar room for a wrap-up. Just as I was wondering how the other groups had gone, the feedback sheets from earlier were summarized for us using Google's generative AI, Gemini. ![](/assets/blog/authors/hoka/20250714/image4.png =600x) I had planned to look through the other groups' feedback sheets at a later date, but they were converted to text via Gemini on the spot and shared with everyone, a true "Feedback is a gift!" moment. Beyond the training content itself, we also learned a great deal about how to learn more efficiently: how to absorb material in a short time, how to make use of feedback sheets, and how to share information across groups. Thank you so much to everyone at Google.

Looking Ahead

Through this training, we learned that the relatively demanding 10x Innovation Culture Pitch practice session is effective for members beyond management as well, so in fiscal 2025 we would like to run the 10x Innovation Culture Pitch practice session at KTC ourselves. KTC's challenge to keep generating innovation continues. ![](/assets/blog/authors/hoka/20250714/image8.png =600x)
Hello. I’m @p2sk from the DBRE team. The DBRE (Database Reliability Engineering) team is a cross-functional organization focused on resolving database issues and developing platforms. Recently, I had the opportunity to contribute to the OSS repository terraform-provider-aws. Specifically, I implemented a new resource called aws_redshift_integration that lets users manage, with Terraform, the managed data integrations between DynamoDB or S3 and Redshift that AWS officially made available in October 2024. The PR has already been merged and released in v5.95.0, and the feature is now available. This was my first OSS contribution, so I was a little worried about whether I could complete it, but with the help of generative AI, I was able to see it through to creating the PR. It can sometimes take months after a new AWS feature becomes GA before it’s supported in Terraform (implemented as a new resource in terraform-provider-aws). In such cases, I felt that it was a huge advantage to have the option to implement it myself instead of waiting for official support. That’s why this article is aimed at anyone who, like me, wants to make their first contribution by adding a new resource to Terraform’s AWS Provider; it shares insights to help you work efficiently from the beginning. Maybe in the future we’ll be able to just hand an issue over to a coding agent and have it generate the entire PR, but at the moment that still seems quite difficult. I hope this article will be helpful to anyone in a similar situation.

About the Resource I Added

The resource I implemented enables management of two AWS features, each briefly described below:

Zero-ETL integration from DynamoDB to Redshift
Event integration from S3 to Redshift

Zero-ETL integration is a managed data integration feature that eliminates the need to build an ETL pipeline. The "zero-ETL integration" feature was initially launched as a data integration between Aurora MySQL and Redshift, and has since expanded support to multiple other sources and targets. Here’s the architecture diagram: ![Architecture diagram for DynamoDB to Redshift zero-ETL integration](/assets/blog/authors/m.hirose/2025-04-23-11-13-17.png =700x) Source: AWS - Getting started with Amazon DynamoDB zero-ETL integration with Amazon Redshift Similarly, event integration from S3 to Redshift allows files added to an S3 bucket to be automatically and quickly integrated into Redshift. Although these two features are technically separate, they share the same API for creating an integration. Since resources in terraform-provider-aws are mapped directly to APIs, supporting this API in Terraform makes it possible to implement both features at the same time. So in the end, I only needed to add one resource.

Criteria for Adding Resources

The official documentation states the following: New resources are required when AWS adds a new service, or adds new features within an existing service which would require a new resource to manage in Terraform. Typically anything with a new set of CRUD API endpoints is a great candidate for a new resource. So, having a new set of CRUD API endpoints is a major factor in deciding whether a new resource should be added. In this case, the criteria were met, so I went ahead and implemented a new resource.

Contribution Flow

The process is very well explained in the official documentation.
Configure Development Environment Debug Code (Skipped this time because it was a new resource) Change Code Write Tests Continuous Integration Update the Changelog Create a Pull Request Based on the above items, the steps recommended in this article are summarized below. In addition, the effort levels marked using ★ based on my own experience. You’ll need to check the official documentation for detailed instructions, but I hope these notes from actually doing it will help you get a smoother start. Investigate or create related issues ★ Preliminary research on the relevant AWS API and SDK ★ Configure development environment ★ Validate the target resource and code dependent resources ★★★ Generate boilerplate code using the scaffolding tool ★ Modify the code and check if it works ★★★★★ Write a test ★★★ Run a continuous integration test locally ★★ Update the documentation ★ Create a pull request ★ Create and push a changelog ★ Before diving into the details of each step, I want to first highlight a few things that are good to know before starting development. Points to Note Mixed coding styles due to multiple SDKs In terraform-provider-aws, the repository contains two different patterns using different SDKs. Terraform plugin framework The new SDK recommended for use at this time Terraform Plugin SDKv2 No longer recommended for new development, However, it’s still used for maintaining and fixing existing resources. There may still be code for the unsupported v1 version, so in reality, there are three possible patterns. Therefore, if you’re using generative AI to assist with research or coding, it’s a good idea to include the Terraform Plugin Framework as a target in your prompt. If you’re interested in the historical background of this, check out ChatGPT Deep Research’ results , though keep in mind there’s always a chance of hallucination. Licensing Terraform itself changed its license to BSL in 2023, which means it is no longer defined as OSS, but terraform-provider-aws will still remain OSS under the MPL 2.0. Various providers appear to be used in opentofu , which was forked from Terraform. The AWS Provider for opentofu is also forked from terraform-provider-aws, so by contributing to the provider, you’ll indirectly contribute to both Terraform and opentofu. If you’re interested in the details behind this, check out ChatGPT Deep Research’s results . (Take the same precautions regarding hallucination.) The following section explains the actual steps. Note that the test execution times mentioned in this article are approximate values based on the following environment. Machine: MacBook Pro Chip: Apple M2 Pro Memory: 32 GB 1. Investigate or Create Related Issues When creating a PR, include a link to the related issue (e.g. "Relations" in the image below). So, first search for a related issue and if you don’t find one, create one. ![Related issue description](/assets/blog/authors/m.hirose/2025-04-17-12-45-46.png =700x) If an issue has already been created, someone else might be working on it. Be sure to skim through the comments to check whether it looks like work has already started. In this case, an issue already existed, so I simply linked to it when I created the PR. 2. Preliminary Research on the Relevant AWS API and SDK To implement a new resource, the Go SDK (aws-sdk-go-v2) must support the relevant CRUD operations for the resource. I assume that the SDK will generally be provided at the same time as it becomes GA, but there may be some lag. 
The go.mod in terraform-provider-aws also needs to reference an SDK version that supports the relevant resource, but it seems to be updated frequently by the maintainers, so in many cases it will already be up to date and you won’t need to do it yourself. This time, I found it convenient to bookmark the following references so I could refer to them whenever I wanted during development. They’re also useful if you want to feed them into a generative AI for reference.

API Reference
https://docs.aws.amazon.com/ja_jp/redshift/latest/APIReference/API_CreateIntegration.html
https://docs.aws.amazon.com/ja_jp/redshift/latest/APIReference/API_ModifyIntegration.html
https://docs.aws.amazon.com/ja_jp/redshift/latest/APIReference/API_DeleteIntegration.html
https://docs.aws.amazon.com/ja_jp/redshift/latest/APIReference/API_DescribeIntegrations.html

SDK Reference
https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/redshift#Client.CreateIntegration
https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/redshift#Client.ModifyIntegration
https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/redshift#Client.DeleteIntegration
https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/redshift#Client.DescribeIntegrations

Initially, my motivation was to make DynamoDB zero-ETL integration compatible with Terraform, but when I looked through the references, I found that the API’s SourceARN parameter pattern also supported S3, as shown in the figure below. That’s when I realized I’d need to validate the S3 integration as well. Since the validation scope can end up being broader than expected, it’s a good idea to review all input and output in the reference before jumping in. ![SourceARN of CreateIntegration()](/assets/blog/authors/m.hirose/2025-04-17-15-43-29.png =700x) Source: AWS - Redshift CreateIntegration API Reference Also, depending on the type of resource, there may be no Delete or Modify available. In those cases, you only need to implement what’s provided. For example, with the zero-ETL integration between Aurora MySQL and Redshift, only Create / Delete / Describe were available at the time of GA, with Modify added later. Redshift has two SDK directories: redshift and redshiftserverless. I wasn’t sure whether I needed to implement both, but since the relevant API didn’t exist under redshiftserverless, and the functions under redshift could also create integrations for serverless, I concluded that implementing it under redshift alone would be sufficient.

3. Configure Development Environment

Just follow the steps in the official documentation and you should be good to go. However, running make testacc, which creates the actual resource and checks if it works, is unnecessary at this point. You may not need to run make test either; it took around 30 to 40 minutes in my environment. By following the steps in the Using the Provider section, you’ll be able to run Terraform commands using the locally built provider. You can consider it working correctly if a warning like the one below appears during execution. This confirms that your locally built provider is being used when running Terraform. Although you can check if it works via the "acceptance test" described later, I found that using the local build directly with Terraform commands is a much faster way to iterate between building and testing. Personally, checking it this way felt more intuitive since it aligned with how I normally use Terraform. If you want to debug in more detail, you might find delve useful.

4.
Validate the Target Resource and Code Dependent Resources Before starting to code, it’s a good idea to check if the new AWS resource you’re planning to add works as expected. This helps build a deeper understanding of how it works. In this case, you will most likely need to create dependent resources before creating a new AWS resource. For example, in my case, the integration depended on the following AWS resources. (To be precise, the source can be either a provisioned Redshift, Redshift Serverless, or S3.) aws_vpc aws_subnet aws_security_group aws_dynamodb_table aws_dynamodb_resource_policy aws_redshift_subnet_group aws_redshift_parameter_group aws_redshift_cluster aws_redshift_resource_policy aws_kms_key aws_kms_alias aws_kms_key_policy aws_redshiftserverless_namespace aws_redshiftserverless_workgroup aws_s3_bucket aws_s3_bucket_public_access_block aws_s3_bucket_policy aws_iam_role aws_iam_role_policy aws_redshift_cluster_iam_roles I highly recommend coding the dependent resources as .tf files at this point. The reasons are as follows. If your validation and development cannot be completed in one day, it will be costly, so you’ll want to apply and destroy each time. You’ll need a similar configuration for "acceptance test" described later, so having it ready upfront will save time. Formatting with terraform fmt now will also make local CI testing smoother later on. I think you can speed up the HCL coding significantly by leveraging generative AI. After coding the dependent resources, you can use the AWS Console or CLI to manually create the target resource and validate its behavior. 5. Generate Boilerplate Code Using the Scaffolding Tool When adding new resources , it’s recommended to use a scaffolding tool called Skaff to generate the base code. The resource type name follows a specific naming rule : aws_${service name}_${AWS resource name} . The AWS resource name should match the function name used in the SDK. For example, in this case, the "CreateIntegration" function is provided, so the AWS resource name is "Integration." It seems best to use the value of the service directory in the repository as the service name. Therefore, the resource type name in this case is aws_redshift_integration . I also used this as the name of my feature branch, f-aws_redshift_integration . With Skaff, you just need to specify the AWS resource name, so after changing to the directory for the relevant service, I executed the following command. $ pwd /Users/masaki.hirose/workspace/terraform-provider-aws/internal/service/redshift $ skaff resource --name Integration Running Skaff generates three files: the resource code, test code, and documentation. You can view the generated file here , and it is a user-friendly file with extensive comments. Comparing these initial files to the final merged code also gives a clear picture of what needs to be modified. 6. Modify the Code and Check If It works Based on the generated code, I began modifying it so that it actually worked. As described in the documentation , the first step is implementing the resource schema, followed by the CRUD handlers. In the Terraform Plugin Framework, the CRUD handlers are named intuitively: "Create," "Read," "Update," and "Delete." For example, the first time you run terraform apply to create a new resource, the Create() function implemented here will be called. 
Within that, the corresponding function in the Go SDK (in this case CreateIntegration ) is executed, and internally the corresponding AWS API (in this case CreateIntegration ) is executed to create the resource. If terraform apply is used to perform modifications without replacing, the Update() function is executed, and if terraform destroy is used to delete the resource, the Delete() function is executed. Whenever resource information needs to be read, The Read() function gets called. Resource schema implementation In the Schema() function, you define the arguments that Terraform accepts and the attributes that it outputs as schema information. Define each field in the Attributes map, as shown in the code below. Each attribute is a struct whose key is a name in Terraform (snake case) and whose value implements the schema.Attribute interface, using an appropriate one from schema.MapAttribute, or schema.StringAttribute. // 修正後の Schema() 関数の一部を抜粋 func (r *integrationResource) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { resp.Schema = schema.Schema{ Attributes: map[string]schema.Attribute{ "additional_encryption_context": schema.MapAttribute{ CustomType: fwtypes.MapOfStringType, ElementType: types.StringType, Optional: true, PlanModifiers: []planmodifier.Map{ mapplanmodifier.RequiresReplace(), }, }, names.AttrARN: framework.ARNAttributeComputedOnly(), names.AttrDescription: schema.StringAttribute{ Optional: true, }, "integration_name": schema.StringAttribute{ Required: true, }, As shown above, the parameters marked as required in the SDK reference should be set with Required: true , and if a change requires replacing the resource, add a RequiresReplace() modifier. Personally, I found it challenging to choose the appropriate modifier. Modifiers can be implemented on your own, so I decided to implement one manually, but later found that a maintainer had replaced mine with an existing built-in modifier after creating the PR. If you’re unsure, it is a good idea to first understand the functions provided by the modifier that corresponds to the target type, such as stringplanmodifier , and check whether you can use them. Through the maintainer’s feedback after submitting the PR, I learned that most cases can actually be handled using existing modifiers. Along with that, I also defined the ResourceModel struct. type integrationResourceModel struct { AdditionalEncryptionContext fwtypes.MapValueOf[types.String] `tfsdk:"additional_encryption_context"` Description types.String `tfsdk:"description"` IntegrationARN types.String `tfsdk:"arn"` IntegrationName types.String `tfsdk:"integration_name"` KMSKeyID types.String `tfsdk:"kms_key_id"` SourceARN fwtypes.ARN `tfsdk:"source_arn"` Tags tftags.Map `tfsdk:"tags"` TagsAll tftags.Map `tfsdk:"tags_all"` TargetARN fwtypes.ARN `tfsdk:"target_arn"` Timeouts timeouts.Value `tfsdk:"timeouts"` } Implementing CRUD handlers and related logic All CRUD handlers are implemented by creating an input struct for the SDK and calling SDK functions. You’ll also implement the functions used in the CRUD handler. This includes the following: A finder function to retrieve the resource information A waiter function to wait for create, update, or delete to complete A status function to check the state of the resource A sweeper function to remove all resources (mainly for testing; not always required) Please note that some services have their own Go files such as wait.go or find.go. In that case, you need to add your logic there. 
If not, it seems fine to include all the logic in the file you’re working on. The Redshift service I used already had wait.go, so I added the relevant logic there. Registering resources Once the implementation is complete, you need to register the resource so that the Terraform Provider can recognize it. The following annotations are required, but since it’s already included in the code generated by Skaff, you don’t need to write it yourself. Just be careful not to delete it by mistake. // @FrameworkResource("aws_redshift_integration", name="Integration") func newIntegrationResource(context.Context) After writing the above annotations, run make gen in the project root directory. This will regenerate service_package_gen.go in each service package, and your newly implemented resource will be registered with the provider. Once you have reached this stage, you can run make build , and if it passes, you will be able to check that it works using commands like terraform apply . Verifying operation Write the newly implemented resources in HCL and run terraform apply to check that it works. In step 4. Validate the target resource and code dependent resources, the dependent resources have already been coded, so here you can define only the newly implemented resource in a separate file under a different directory, and manage it with a separate state. This way, you can apply and destroy only the resource you’re checking to see if it works, which helps speed things up. Alternatively, if everything is written in a single file, you can still apply just the new resource by specifying a target, like this: terraform plan -target=new_resource_type.hoge -out=myplan terraform apply myplan 7. Write a Test In terraform-provider-aws, there are three tests: Acceptance Tests These verify that Terraform can successfully create, update, and delete AWS resources. Since resources are actually operated, monetary costs are incurred. That’s why the documentation states that running them is optional. Unit Tests Function level tests. In this case, I judged that they weren’t necessary and skipped them. CI Tests Comprehensive testing including linting, formatting, and other checks after PR is created. Since CI tests only run what is already prepared, acceptance tests and unit tests are the tests that should be written by the contributor. Unit tests are recommended when implementing complex logic, but since that wasn’t the case this time, I judged they weren’t necessary and wrote only the acceptance test. For acceptance tests, the AWS resources needed for testing must be written in HCL, as shown in the code below: func testAccIntegrationConfig_base(rName string) string { return acctest.ConfigCompose(acctest.ConfigVPCWithSubnets(rName, 3), fmt.Sprintf(` data "aws_caller_identity" "current" {} data "aws_partition" "current" {} resource "aws_security_group" "test" { name = %[1]q vpc_id = aws_vpc.test.id ingress { protocol = -1 self = true from_port = 0 to_port = 0 } ... Since the dependent resources were already written in code in step 4. Validate the target resource and code dependent resources, this step was very easy with simple copy-and-paste. When running tests, you can execute them at the function level by specifying the function name, like this: make testacc TESTS=TestAccRedshiftIntegration_basic PKG=redshift To run all tests for a specific resource at once, delete the part after the underscore and run it like this: make testacc TESTS=TestAccRedshiftIntegration PKG=redshift 8. 
Run a Continuous Integration Test Locally The terraform-provider-aws repository has a strict CI pipeline to ensure code quality. These checks run automatically after creating a PR, but it's a good idea to run them locally first and make sure everything passes before submitting. A complete check can be run with make ci , but in my case, it took several hours to complete. So, I recommend first fixing any issues detected with make ci-quick and then running make ci to minimize the wait time. For me, after a few rounds of fixes, I was able to pass all checks with make ci-quick locally. But when running make ci , I encountered one issue that required modifying the GNUmakefile. Since this may be a problem specific to my environment, I didn’t include it in the PR and instead worked around it with a local fix. As described in the Documentation , running make testacc-lint-fix first can automatically fix issues only related to terrafmt , so that’s a good step to begin with. 9. Update the Documentation Update the documentation generated by Skaff. What you write here will be reflected as Frequently viewed documentation . There shouldn’t be any issues if you refer to existing documentation and follow their format. 10. Create a Pull Request This step should be pretty straightforward and not cause any confusion. 11. Create and Push a Changelog I think you can create it without any problems by following the official documentation . The PR number is required according to the file naming rule, so you need to submit a PR first, then create a changelog and push it afterward. That covers the steps up to creating a PR. In the next section, I’ll share the insights I gained through this initiative. Changes Made by the Maintainer The PR was successfully merged and released in v5.95.0 recently, and the feature is now available for use. Before the merge, the maintainer made some revisions to the code. Here’s an overview of what those changes were: Removal of the ID from schema.attribute Although the following comment was already included in the code generated by Skaff, I overlooked it and left the ID attribute, so it was removed as unnecessary. It’s a good idea to refer to the AWS API reference to decide whether to keep it or not. // Only include an "id" attribute if the AWS API has an "Id" field, such as "IntegrationId" names.AttrID: framework.IDAttribute(), Changes to variable names, etc. This was the majority of the changes, and I realized my naming needed more attention. On the other hand, the struct name "resourceIntegrationModel" was automatically generated by Skaff, but it was modified to "integrationResourceModel." This might indicate that Skaff’s naming logic isn’t entirely consistent. Replacing my custom modifier with an existing one To address a specific issue, I implemented my own plan modifier, but it was modified to an existing one. Since I wasn’t fully confident about this part, I left a detailed comment in the PR . In response, I received the following feedback, which made me realize I should have looked more closely into the existing modifiers beforehand. However, by clearly explaining why I implemented it the way I did, the maintainer was able to make an informed correction. This can be accomplished with the RequiresReplaceIfConfigured plan modifier. To see whether this fix could have been guided by an LLM, I modified the prompt I was using during implementation and sent it to LLM , and this time, the LLM suggested a fix using the existing modifier. 
During development, I had assumed that I had no choice but to create my own modifier, and gave the LLM overly specific instructions, which may have limited its ability to suggest a better solution. This experience taught me that there’s room to improve how I use the LLM. Addition of Check Items in Acceptance Tests As noted in this commit , I learned that acceptance tests can be written to specify whether a test scenario is expected to "create or update a resource." This helps detect unintended resource recreation, which can be very useful. Cost of Creating AWS Resources Since I ran the acceptance tests myself and also ran individual checks to see if it works, some monetary cost was incurred from creating AWS resources. I used Terraform to manage the infrastructure as code (IaC), and destroyed resources frequently when they weren’t needed. Still, the total came to about $50. Most of this was the cost of Redshift, which will significantly vary depending on the resources you’re implementing. Other Thoughts Lesson learned: a huge effort goes into standardization In repositories like those related to Terraform, which involve thousands of contributors, it’s essential to have a solid "track" that allows everyone to reach the same goal. If standardization is weak, maintainers (reviewers) have to put in a lot more effort, and that slows down feature releases. Given this background, I really felt a strong push toward code standardization by providing various resources and tools like: Extensive documentation Detailed guides for each type of contribution (bug fixes, adding new resources, etc.) Description of rules such as naming Scaffolding using the dedicated tool "Skaff" Automatic generation of base code that can be easily fixed Locally run CI tests Thorough checks can be performed from various perspectives, including lint, formatting, and testing. By getting everything to pass locally first, there’s a high chance that your code will pass all the CI checks after you open the PR, reducing the burden on maintainers. In particular, you can really see the effort put into enabling local execution of CI-equivalent tests in the documentation below. NOTE: We’ve made a great effort to ensure that tests running on GitHub have a close-as-possible equivalent in the Makefile. Japanese translation 注: GitHub で実行されるテストについては、Makefile に可能な限り同等のコードが含まれるよう最大限の努力を払っています。 This helps minimize inconsistencies in code style, even down to the smallest details. For example, as shown below, if a value is hardcoded instead of using a const-defined constant , the system prompts you to use the appropriate constant. As you can see, the test items are very detailed and cover a wide range, but on the flip side, once both the acceptance tests and local CI tests pass, I was able to create my very first PR with confidence. In the DBRE team I belong to, DevOps specialists had already structured the entire development flow from scaffolding to formatting, linting, and testing as described above. Thanks to that, I was able to follow the process smoothly. Reflection: there is room for improvement in the use of generated AI Looking back, I realize there was room for improvement in how I used generative AI. To speed up my understanding of an unfamiliar repository, I could have indexed it with GitHub Copilot. That said, in cases like this one where the repository contains a mixture of code from different SDKs, I realized it's important to be more deliberate, such as clearly specifying the currently recommended SDK when asking questions. 
In fact, I looked into the Plan Modifier area through deep research and tried a solution I found in an issue online. However, it didn’t work because the solution was based on the old SDK. Instead, I fed the LLM with a set of relevant sources, and it returned code that resolved the issue with almost no modification. I hope to leverage LLMs more effectively to stay up to date and accelerate development. Challenges: mixed code from different SDKs As mentioned above, the repository contained a mix of code with different SDKs, so "not all existing code could be used for reference." It took me a while to realize this. For example, the implementation of the sweeper function differs between the current SDK (Terraform Plugin Framework) and the previous one. In this case, the target service was Redshift, but the file for implementing the Redshift sweeper function hadn’t yet been updated to use the current SDK. I based my initial implementation on the old SDK, which resulted in non-working code. I solved the problem by finding functions implemented with the current SDK in another service and using them as a reference. That said, it’s best to be mindful of whether the existing code you’re referencing follows the current recommended SDK conventions. Dividing Tasks Between AI and Humans Lastly, I’ve summarized my current perspective on which steps are better handled by AI or humans in the table below. After completing this development, I also had the AI engineer Devin try the same task for validation purposes, but as written in the official documentation it seemed necessary to break down the task into smaller steps when delegating to AI. Of course, this is just my current view, and is likely to change as AI evolves. Step AI / Human Notes 1. Investigate or Create Related Issues Human Fastest to search manually via web or GitHub Issues 2. Preliminary Research on the Relevant AWS API and SDK Human Quicker to look it up manually 3. Configure Development Environment Human Quicker to set it up manually 4. Validate the Target Resource and Code Dependent Resources AI + Human Using LLMs is effective for coding dependencies 5. Generate boilerplate code using the scaffolding tool Human Quicker to run manually 6. Modify the Code and Check If It works AI + Human Let the LLM draft the base, then finish the details manually 7. Write a Test AI + Human Let the LLM draft the base, then finish the details manually 8. Run CI tests locally AI or Human LLM may adjust code to pass tests, but long test run times may consume more credits depending on the product 9. Update the Documentation AI + Human Feed merged document to LLM to generate a draft 10. Create a Pull Request Human Likely faster to handle manually 11. Create and Push a Changelog Human Likely faster to handle manually Conclusion Contributing to the Terraform Provider seemed like a high hurdle to overcome, but I found that once you get used to it, the process goes smoothly—thanks to well-maintained guides, scaffolding tools, and a solid testing framework. Since this was my first time, I spent a lot of time reading through the documentation, but I believe I'll be able to develop faster next time. If you're excited to Terraform new AWS features as soon as they are released, I definitely encourage you to give it a try. I hope this article can be a helpful reference when you do. KINTO Technologies' DBRE team is actively looking for new members to join us! Casual interviews are also welcome, so if you're even slightly interested, feel free to contact us via DM on X . 
Don't forget to follow us on our recruitment X too!
アバター
1. イベント概要 2025年7月11日、12日に5回目の開催となるSRE NEXTが開催されました。弊社はプラチナスポンサーとして、企業ブースの出展とスポンサーセッションへの登壇をしました。 数多くの素晴らしいセッションに加え、スポンサーブースや書籍コーナーにて多くの方々と交流させていただくことができ、非常に貴重な2日間を過ごすことができました。 本記事では、今回が初出展となったKINTOテクノロジーズのメンバーとイベントを振り返る座談会をした結果についてお伝えします。 2. KINTOテクノロジーズとSRE 2-1. どんな組織 KINTOテクノロジーズはトヨタグループ初の内製開発組織としてクルマのサブスクKINTOを始め、コンシューマー向けのモビリティ関連サービスのシステム開発や保守運用をしています。2025年7月現在で約400名のエンジニア、デザイナー、プロダクトマネージャーなどが在籍しており、社内外に提供するサービスを開発しています。 このような組織の中でSREチームはプラットフォームを担当する部署の1つのチームとして、プロダクトチームと連携して信頼性の維持向上や開発者への支援を行っています。 2-2. SREの現状 当日のスポンサーセッションにて長内が発表しましたが、KINTOテクノロジーズでは横断組織が充実しており、クラウドインフラエンジニア、DBRE、プラットフォームエンジニアリング、セキュリティ専門部隊、CCoEおよびファイナンス連携する部隊など、多くの企業でプラットフォーム系SREsが担っているであろう責務の多くを複数のチームで分担しています。 当日の登壇資料はこちら👉 ロールが細分化された組織でSREは何をするか? - Speaker Deck SREingの実践を推進する2名のエンジニアはプロダクト開発チームと連携してプラクティスの実践を試みていますが、サービスレベルをビジネス指標や開発プロセスと結びつける難しさや、チームトポロジーにおけるプラットフォーム・パターンでのアプローチの難しさを感じながらも、自分たちができる価値提供のありかたを試行錯誤し続けています。 2-3. 出展のモチベーション KINTOテクノロジーズでは2022年にテックブログチームを立ち上げ、2023年にはテックブログ"チーム"から技術広報"グループ"へとステップアップし情報発信に力を入れました。 2024年にはカンファレンスのスポンサー活動を開始し、最近でも開発生産性カンファレンスに代表の小寺が登壇したり、さまざまなジャンルのカンファレンスに協賛したりと、エンジニアコミュニティを支援しています。 エンジニアたちが直接コミュニケーションを取れるカンファレンスという機会はこの界隈の魅力だなと感じており、この機会に携われていることを嬉しく思っております。 KTCのSREの領域はメンバーが少なく、これからの成長を目指すフェーズなので、まずはKTCのSREの存在を知ってもらうこと、 そしてロールが細分化されているといったKTCならではの環境下において、我々ならではの苦悩や取り組みを共有することで、同じような課題に取り組む方々への参考となればというモチベーションでスポンサーセッションを行うことにしました。 3. 当日の動き 3-1. ブース運営 弊社は来訪者のみなさんに「あなたの”NEXT”は?」というテーマで付箋を貼ってもらい、ご協力いただいた方にはガチャガチャを回してノベルティをプレゼントをしていました。KINTOのマスコットキャラクターである「くもびぃ」のぬいぐるみ(大/小)や、トヨタ車のトミカをノベルティの1つとして提供していましたが、みなさんにとても好評でした。 スポンサーブースで提供したノベルティ ブース運営1日だけでボードが埋まるほどの”NEXT”を皆さんに記載いただき、参加者の方々と今年のテーマでもある「Talk Next」を一緒に体験することができました。 訪問いただいた方々に多くの”NEXT”を記載いただきました 3-2. 登壇 弊社からはスポンサーセッションとして、SREチームの長内が「ロールが細分化された組織でSREは何をするか?」というタイトルで発表しました。初めての外部登壇ということで非常に緊張する様子が伺えましたが、日々悩みながらも地道に取り組んだ成果ということもあり、本人も納得のいく発表ができたようです。 初めての外部登壇で緊張している長内 登壇後はどのような反応をいただけるか非常に不安でしたが、幸運にも数多くの方にAsk the speakerの場に訪問いただき、20分の発表時間には入れられなかった裏話なども含めて楽しくお話しさせていただきました! Ask the speaker の様子 3-3. 新しい学び 弊社は若手エンジニアも多く、外部イベントへの参加に慣れていないメンバーも数多くいます。今回のイベントはそういったエンジニアの刺激になる体験も多く、「詳解 システム・パフォーマンス」の著者であるBrendan Gregg氏を始め、著名なエンジニアの方々と交流できたのは非常に貴重な機会となりました。 Brendan Gregg氏とのツーショットに興奮する若手エンジニア また、クラウドエンジニアとしてキャリアをスタートした若手エンジニアは、物理ネットワークを支える技術には疎いという課題があったのですが、会場でディスプレイされていたネットワークルーターやスイッチなどの役割について非常に分かりやすく解説いただくような機会もあり、技術力向上に直接的に役立つような経験もすることができました。 物理ネットワークを知らないクラウドエンジニアがルーターやスイッチについて教えてもらっている風景 3-4. 参加者との交流 今回はスポンサーとして、KINTOやKINTOテクノロジーズを多くの方に知っていただくことを目的に参加しましたが、実際にはそれ以上に、参加者の皆さんとの交流から得られた刺激や学びが何よりの収穫になりました。 訪問者と歓談する運営メンバー 4. 参加メンバーによる座談会 前述のようなとても楽しい二日間を過ごした運営メンバーにて、振り返りの座談会をしてみました。 SRE: 長内 、kasai / クラウドインフラ: こっしー 、 白井 / 技術広報: ゆかち オフィスで座談会をする運営メンバーたち 4-1. 何が一番印象に残ってる? kasai「ずっとブースにいたのでセッション見れてないですが、ブースで来てくれた人と話してて、SREの仕事をする中でどう生成AIを使っていくかということに悩まれてる方が何人かいたのが印象的でした。」 長内「自分の発表に興味を持って聞きに来てくれた人がいたのが、すごく嬉しかったです。その後のAsk the speakerでも直接話しに来てくれる人がいて、本当にありがたいなって思いました。」 白井「参加者全員のイベントを絶対成功させようという熱量が1番でした。Talk Nextということで、みなさんがノウハウを共有しあい、互いにリスペクトを持って話している姿が良いなーと感じました。運営の方にはSRE NEXTを作り上げてくださったことに感謝しつつ、チャンスがあれば運営側として参加させていただきたいなと思いました。」 ゆかち「今回運営協力してくれたみんなのコミュ力の高さですかね!それぞれの人柄が出ており、楽しそうにブース対応しているのを見ていて嬉しかったです。せっかくなのでXにポストしたハイライトもみて欲しいです(笑)」 @ card こっしー「私はコミュニティの熱量の高さが一番印象に残ってます。真剣にセッションを聞く方もいれば、色んな場所で楽しそうに交流する方々もいて、同じテーマで日々悩む方々が経験を共有する場としてとても良い場所だなって思いました。」 4-2. 初の外部登壇どうだった? 
To 長内さん 長内「知らない人の前で何かを発表したのって、もしかすると小学生の時のピアノの発表会以来かもしれないです…(笑)」 こっしー「今回の登壇にはどんなモチベーションがあったんですか?伝えたいことがあるとか、ここは皆とシェアしたいとかそういうものがあったんですか?」 長内「最初のモチベーションとしてはまずKTCのSREという存在を認知してもらおうっていうのがメインでした。じゃあそのために何話そうって考えてたんですけど、スポンサーが決まった時点ではこれだ!って思えるものがなくて… でも登壇することが決まった以上は聞いてくれる人に何かしら刺さるネタを話したいよねってなって、その中で今回の発表にもあった改善ツールの案とかも出てきて、そこからアウトラインが徐々に決まっていきましたね。そこが決まってからは話したい内容で情報量が不足している部分を追加で集めつつ、今までやってきたことも繋げていく感じで。登壇をきっかけに、自分たちの今後やっていくこともある程度見てきたこともあって、登壇駆動でかなり成長できた気がします。」 こっしー「ブースでもKTCさんの発表良かったですって言って頂く方も多かったんですが、Ask the speakerではどういった質問がありましたか?」 長内「発表の中にあったNew Relic Analyzerがどのような仕組みで動いているのかだったり、Devinの提案の精度を上げるためにどのような取り組みをしていきたいかなど、発表のことだけでなく足を運んで頂いた方の課題感なども交えて色々なことをお話しできました。それと、以前一緒に働いていた方も足を運んでくれて、当時の話もしつつ互いの近況を伝え合う良い時間になりました。」 こっしー「同じような領域で悩みを抱えている企業さんとか、やろうとしているけどやれないような障壁に対してどうやってアプローチする?みたいな質問があったりしたんですね。」 長内「そうですね。やっぱり皆さん似たような悩みを抱えているんだなというのを実感しました。」 ゆかち「そういえば登壇を私の隣で聞いていた人が1日目にブースに来てくれていた方だったので、登壇後に声かけてみたら福岡に住んでる方で、7月に福岡拠点できたんです〜!というお話から福岡で開催するイベントに招待できたんですよ! 長内さんの登壇を聞いた上で弊社に興味持ってもらえたようなので、すごい嬉しかったです!」 長内「SRE NEXTの2日後に面接する方も来てくれてて、発表も聞いてもらったことでよりKTCのことを理解してもらえたんじゃないかなって思いました。」 こっしー「初の外部登壇、緊張したけど、想定してなかったこととか、イメージしきれてなかったことも特になかった?」 長内「本当は3日前くらいから何も食えなくなるくらい緊張してる想定だったんですけど、意外と緊張しないなと思って。結構ご飯食べれるじゃんってなってました。」 ゆかち「初めてだし、とちゃんと念入りに準備してたからなんだろうね。」 長内「そうなのかもしれない。意外と前立っても、みんなが見える位置にバーって座ってくれてたのもあるし、発表中もこっしーさんのカチューシャに付いてるくもびぃと目が合ったりして、自分としてはリラックスして喋ったつもりでした。ただ、写真を見たらめちゃめちゃ険しい顔してて、こんな顔してたんだ俺…って思いました(笑)」 こっしー「直前めっちゃ目が血走ってたよね。僕は長内さんは全然喋れるかなと思ってたけど、みんなが煽るし緊張してる感じになってるから、始まる直前こっちがドキドキしてきて(笑) でも、意外と安定してたし話の内容も隣のチームとしても勉強になるものとか、そのアプローチすげぇみたいなものがいっぱいありました。」 ゆかち「あの日ちょっと後悔したのが、発表前にみんなで前行って背中叩きにいけば良かったなって(笑) こっしーさんや白井くんが登壇するってなっても心配はないんですけど、長内さんって今まで外部登壇経験もないし、顔がこわばっているのもあってすごい心配でした(笑) でも話し出したら安定していて、なんか感動しちゃいました(笑)」 こっしー「結果、やって良かったと!」 長内「次回以降の課題は表情管理ですね(笑)」 4-3. Talk Next 次に何やる? kasai「今改善ツールを作ってるんですけど、それはやり切りたいと思ってますし、それをやる過程で喋れることがさらに増えると思うので、それをまた外部に発信していけたらいいなと思っています。」 長内「自分としても改善ツールの品質や提案の精度を上げるというものもありますが、やっぱりこういったツールは使ってもらう人に興味を持ってもらわないことには始まらないので、開発を続けつつ、色んなプロダクトの人たちへの普及活動ということもやっていきたいです。あとはサービスレベルの部分もエンジニア内で決めようとするとうまくいかなかったという結論にしましたが、事業側の人たちと会話して、どれくらいの品質が必要かといったことも話せるようになっていきたいですね。」 白井「今回のイベントを通じてカンファレンスの運営などに携わって色んな人とのネットワークを広げたいと思ったのと、もっと開発者目線で使いやすいプラットフォームを作っていくぞというモチベーションに繋がりました。」 ゆかち「白井くん今回初めてブースに立ってもらったけど、そうやって言ってもらえるといいきっかけになったな〜と思えて嬉しい!今回は、粟田さんやこっしーさんがSRE界隈で知り合いが多くて、KTCを知ってくれている人が多かった気がしていて、みんながそうやってネットワークを広げていってくれることでKTCの知名度も上がっていくし、何より知り合いが増えれば増えるほどカンファレンス参加が楽しくなるので、もっとみんなにも前のめりに参加していって欲しいな~と思いました!」 こっしー「もっと社外の皆さんとコミュニティを盛り上げていけるようにしたいし、そのために社内での文化作りとかプラクティスを実践していきたいですね。」 5. まとめ 5-1. 学んだこと 今回のSRE NEXTでは各社の発表や参加者の方々との交流を通じ次のようなことを学びました。 同じような課題感を持っていることも多いが、会社の数だけアプローチがあり、似たアプローチでもその結果は様々である エンジニアリングだけでなく、ビジネスや組織といった観点からもSREのアプローチを考えることが大切である プロダクトチームとの信頼関係作りがSREの活動に大きな影響を与えるという話が多く、日々のコミュニケーションの重要性を再認識した 5-2. KTCのSREの「NEXT」 これらを踏まえ、KINTOテクノロジーズのSREは次のようなことに挑戦したい(目指したい)と考えています。 改善ツールの更なる発展と普及活動 事業側に越境した妥当性のあるサービスレベルの策定 得られた知見を社内外に発信し、コミュニティの活性化に貢献する 6. さいごに SRE NEXTの運営の方々をはじめ、ブースに来ていただいた方、セッションを聞いて頂いた方、弊社メンバーと交流して頂いた方、大変ありがとうございました。 初めてのSRE NEXTスポンサー、すごく良い経験になりました。今後もSREingの実践と試行錯誤に励み、新しい学びの共有をできる機会を楽しみにしております! KINTOテクノロジーズから参加した運営メンバー 仲間募集中 KINTOテクノロジーズでは、モビリティプラットフォームを一緒に作る仲間を募集しています。ぜひ採用サイトもご訪問ください! 👉 KINTOテクノロジーズ株式会社 採用情報
アバター
Introduction Hello! I'm Kin-chan from the Development Support Division at KINTO Technologies. I usually work as a corporate engineer, maintaining and managing IT systems used throughout the company. In this article, I'd like to share an initiative I've been working on to promote Agile practices across teams and departments within the company. If you're someone who's passionate about starting something new from the ground up and driving it forward, I hope this will be helpful and encouraging. *This article is part of a series on the theme of "Agile." In order for us to become "agile as an organization," we have tackled all sorts of challenges and difficulties. Although there have been failures at times, we have continued to grow steadily. In this series of articles, I would like to introduce some of our actual efforts. Background I joined KINTO Technologies in January 2023. Having been involved in various Agile-related activities both inside and outside the company in the past, I joined KINTO Technologies with a strong desire from the start to connect with the in-house Agile experts across different teams. In my experience so far: Involved in software development teams as a Scrum Master and Scrum Coach in the company Promoted business improvement initiatives centered on Agile in the administrative department Helped build a community of practice by regularly sharing ideas and activities with other in-house practitioners Regularly participated in external Agile communities and conferences ...and so on. Though I had that desire in mind, once I actually started working in the Corporate IT Group, my first impression was that the product development team was farther away than I had thought. That feeling came not just from the "organizational distance" between the product development side and the corporate side, but also from the physical distance. I'm based in the Nagoya Office, but most of the engineers working on product development are in Tokyo. That physical separation played a big part. That sense of distance proved to be quite a hurdle, and since I'm naturally a bit socially awkward, I couldn't actively interact with people in other departments for a while after joining the company. So, I spent my days as a serious corporate engineer quietly holding on to an agile mindset. How it started In the Development Support Division where I work, we have regular one-on-one meetings with our managers. About two months after I joined, I brought up with my then-manager (who's now the department head) that I wanted to connect with some of the Agile experts in the company. During that conversation, my manager mentioned several names, but the one who matched best was Kinoshita-san, who had taken Scrum Master training not too long ago. Kinoshita san is an engineer, a member of the company's Tech Blog team, and also the writer of a post on becoming a Licensed Scrum Master (LSM) " I had actually read Kinoshita-san's articles before joining KINTO Technologies, so I told my manager I'd really love the chance to connect with him. Thanks to that, I was given the opportunity to interact with the Tech Blog team. Meeting The Tech Blog Team When I first met team members of the Tech Blog Team, my honest impression was, "They seem like a fun and unique group of people." Even though each member has a different main job, they all actively contribute to growing this shared product called the "Tech Blog." 
Through that, they've been able to connect with people across the organization and build a kind of cross-functional momentum. To me, this was "one form of an ideal internal community." The Tech Blog team actually started from one passionate person named Nakanishi san taking action. After interacting with the team a few times, I found myself thinking, "I want to help spread a positive culture within the company together with everyone." Then came a turning point. Some Tech Blog team members mentioned, "Kinoshita-san's articles have been consistently getting solid page views, so Agile-related posts could really take off too" That comment, paired with the fact that I've always had a strong passion for Agile, sparked the idea to launch a Tech Blog series focused on Agile. So, Where to Begin? The idea of an "Agile Series" sounded great at first, but once I actually sat down to think about it, I realized I didn't really know anything about what Agile activities at KINTO Technologies even looked like. That meant, I had pretty much zero content to start with. Therefore, the first step for me was to be referred to some experts. With the help of the Tech Blog team's network, I was able to connect with: Someone who previously obtained their Licensed Scrum Master certification within the company Someone at KINTO Technologies who's about to dive into Scrum, drawing on their experience from their previous job Someone who's planning to take the Licensed Scrum Master training course soon The team helped set up some great opportunities to connect, and from there, a natural flow of conversations and interviews about "Agile at KINTO Technologies" started to take shape. At first, I couldn't help but feel a kind of emotional distance, because of the physical distance between Nagoya and Tokyo. But by this point, that had faded and distance no longer really mattered. I started to see everyone as friends who just happen to be a little farther away. Taking the very first step always feels tough. But I came to realize that once you find even the tiniest push to move forward, your body naturally follows. Next Episode That's it for this time. In the next article, I'll talk about what happened as the story continued to unfold. Being able to interact with internal experts and directly feel their thoughts on Agile Getting the chance to join an actual Scrum event and see the energy of the scene up close Being able to talk about that all-too-familiar "Agile reality" where things don't always go smoothly How the interview led to starting an internal meetup for Agile experts to connect I plan to share those experiences. This Agile Series will mainly spotlight Agile at KINTO Technologies. Along the way, I'll be introducing various things step by step, like experts actively working within the company, their team members, and even Agile practices I've come across outside of software development. I Hope you're excited for what's coming next!
アバター
はじめに こんにちは! KINTOテクノロジーズのデータ戦略部DataOpsG所属の上平です。 普段は社内のデータ分析基盤と「cirro」というAIを活用した社内アプリの開発・保守・運用を担当しています。 「cirro」では、AIにAmazon Bedrockを利用しており、Bedrockの呼び出しにはAWSのConverse APIを使用しています。 本記事では、「cirro」にツールや子エージェントの機能を実装するために、ローカル環境でStrands Agentsを検証した事例をご紹介します。 本記事の対象者 本記事は、Amazon BedrockをConverse APIやInvoke Model経由で利用した経験のある方を対象としています。 Strands Agentsとは 2025年5月16日にAWS Open Source Blogで公開されたオープンソースのAIエージェントSDKです。 以下は、AWSのAmazon Web Services ブログで公開されている図です。 図のように、ツールを備えたAIを実装するには、Agentic Loopと呼ばれるループ処理が必要です。 この処理では、AIの応答がユーザーへの回答なのか、ツールを使ってさらに処理を進めるべきかを判断します。 Strands Agentsを使えば、このループ処理を開発者が自前で実装することなく、AIエージェントを構築できます。 参考、図の出典:Strands Agents – オープンソース AI エージェント SDK の紹介 ローカル環境でStrands Agentsを動かす! ※本セクションは、過去にConverse APIなどを用いてBedrockを利用した経験がある方を前提としています。 そのため、モデルのアクセス許可設定などの基本的な手順については説明を省略しています。 また、サンプルのため例外処理も省略しています。 準備 ライブラリ 以下のコマンドで、ライブラリをインストールします。 pip install strands-agents strands-agents-tools 実行① (運のいい方は・・・)最短下記のコードで動きます。 from strands import Agent agent = Agent() agent("こんにちは!") 多くのブログなどではこのコードが紹介されていますが、私の環境ではうまく動きませんでした😂 それはそうですよね・・・モデルもBedrockを呼び出すリージョンも指定していないので・・・ 実行② モデルを正しく呼び出すためには、以下のようにモデルとリージョンを指定する必要があります。 ここでは、弊社のようにSSOでログインし、スイッチロールによって権限を取得する環境を前提としています。 【ポイント】 呼び出すモデルとリージョンをロールが呼び出せるものに設定する。 例:anthropic.claude-3-sonnet-20240229-v1:0(モデル)、us-east-1(リージョン)※リージョンはセッション作成時のプロファイル内で指定しています。 import boto3 from strands import Agent from strands.models import BedrockModel if __name__ == "__main__": # セッション作成 session = boto3.Session(profile_name='<スイッチ先のロール>') # モデル設定 bedrock_model = BedrockModel( boto_session=session, model_id="us.amazon.nova-pro-v1:0", temperature=0.0, max_tokens=1024, top_p=0.1, top_k=1, # Trueにするとストリーミングで出力される。 # ストリーミングでツール利用がサポートされないモデルがあるため、OFF streaming=False ) # エージェントのインスタンスを作成 agent = Agent(model=bedrock_model) # 質問を投げる query = "こんにちは!" response = agent(query) print(response) ここまでで、Converse APIと同様に temperature などのパラメータを指定してBedrockを呼び出すことができるようになりました🙌 でも、Strands Agentsを使うなら…やっぱり ツールを呼び出したい ですよね! 実行③ 下記のようにツールを定義すれば、質問に応じてツールを使用し、Agentic Loopを実行した後の回答を出力してくれます。 【ポイント】 ツールとしたい関数を「@tool」でデコレートしてます。 ツールは Agent(model=bedrock_model, tools=[get_time]) で、関数の配列として渡しています。 import boto3 from strands import Agent from strands.models import BedrockModel #------ツール用に読み込んだライブラリ------------ from strands import tool from datetime import datetime # ツールの定義 @tool(name="get_time", description="時刻を回答します。") def get_time() -> str: """ 現在の時刻を返すツール。 """ current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S") return f"現在の時刻は {current_time} です。" if __name__ == "__main__": # セッション作成 session = boto3.Session(profile_name='<スイッチ先のロール>') # モデル設定 bedrock_model = BedrockModel( boto_session=session, model_id="us.amazon.nova-pro-v1:0", temperature=0.0, max_tokens=1024, top_p=0.1, top_k=1, # Trueにするとストリーミングで出力される。 # ストリーミングでツール利用がサポートされないモデルがあるため、OFF streaming=False ) # ツールを使用するエージェントのインスタンスを作成 agent = Agent(model=bedrock_model, tools=[get_time]) # 質問を投げる。ツールを使用しないとAIは時刻が判別できない。 query = "こんにちは!今何時?" response = agent(query) print(response) 私の環境では下記回答を得ることができました! 
<thinking> 現在の時刻を調べる必要があります。そのためには、`get_time`ツールを使用します。 </thinking> Tool #1: get_time こんにちは!現在の時刻は 2025-07-09 20:11:51 です。こんにちは!現在の時刻は 2025-07-09 20:11:51 です。 応用 ツールについて、今回ロジックベースの処理を返すだけのツールでしたが、 例えばツール内でAgentを作成し、回答をチェックさせるなどの処理を組み込めば、 AIがAIを呼び出す マルチエージェント な仕組みが簡単に作れます。 時刻に加え、子エージェントがトリビアも返すように、ツールを修正したコードは以下です。 【ポイント】 if __name__ == "__main__": で宣言したグローバルスコープの session を使いまわしています。 これをしない場合、私の環境ではモデル設定に1分程度オーバーヘッドが発生しました。 おそらくは何らかの資源確保で時間がかかってしまうのでは…と思います。 @tool(name="get_time", description="現在日時と、日時にちなんだトリビアを回答します。") def get_time() -> str: """ 現在の時刻を返すツール。 注意:この関数では boto3.Session を使った BedrockModel の初期化に グローバルスコープで定義された `session` 変数が必要です。 `session` は `if __name__ == "__main__":` ブロックなどで事前に定義しておく必要があります。 """ current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S") # モデル設定 bedrock_model = BedrockModel( boto_session=session, model_id="us.anthropic.claude-sonnet-4-20250514-v1:0", temperature=0.0, max_tokens=1024, top_p=0.1, top_k=1, streaming=False ) agent = Agent(model=bedrock_model) # ここが子エージェントから回答を得る部分! response = agent(f"現在の時刻は {current_time} です。日時と日付にちなんだトリビアを1つ教えてください。") return f"現在の時刻は {current_time} です。{response}" 最終的なAIの回答は以下にりました。 こんにちは!現在の時刻は 2025-07-10 18:51:23 です。今日は「納豆の日」です! これは「なっ(7)とう(10)」の語呂合わせから制定されました。1992年に関西納豆工業協同組合が関西での納豆消費拡大を目的として始めたのがきっかけです。 面白いことに、納豆は関東では古くから親しまれていましたが、関西では苦手な人が多く、この記念日も「関西で納豆をもっと食べてもらおう」という願いから生まれたんです。現在では全国的に「納豆の日」として認知されており、この日にはスーパーなどで納豆の特売が行われることも多いですよ。 夕食の時間帯ですし、今日は納豆を食べてみるのはいかがでしょうか? 備考 マルチエージェントは比較的簡単に実装できますが、 実際に試してみたところ、AIを複数呼び出す分だけトークン数と応答時間が増加するため、使いどころに悩むところです。 以下は、親エージェントと子エージェントを用いた際の処理コストの内訳です。 区分 親エージェント 子エージェント 全体 入力トークン 1086 54 1140 出力トークン 256 219 475 処理時間 7.2秒 7.3秒 14.5秒 このように、 子エージェントの応答が加わることで全体の処理時間が倍増 していることがわかります。 そのため、マルチエージェントの活用は、 出力の多様性が求められたり、ロジックベースでは対応が難しい複雑なタスク に限定するのが現実的かもしれません。 おわりに 今回は、データ戦略部で展開しているAI活用システム「cirro」を拡張するために、 Strands Agentsを検証した際の“動かすためのポイント”をご紹介しました。 意外とハマりどころが多く、実際に動かす際の参考になれば幸いです。 Strands Agentsを使うことで、ツールや子エージェントによる機能拡張が容易になります。 一方で、処理時間やトークン数の増加、システム組み込み時の権限管理など、課題も見えてきました。 なお、記事内で触れた「cirro」は、Pythonで開発された完全サーバレスなシステムで、 ユーザー自身がタスクや参照データを柔軟に拡張できることが特徴です。 現在は、ダッシュボードの案内やアンケート分析などに活用しています。 こちらについて、AWSの紹介記事はありますが、いずれ詳しくご紹介できればと思っています! AWSのcirroの紹介記事
アバター
1. Starting Point: Overview Nice to meet you! I'm YOU, an infrastructure architect in the Cloud Infrastructure Group at KINTO Technologies. I joined the company this January, and this is my first post on the Tech Blog. I’m excited to share more in the future! I started my AWS certifications with SAA in October 2023 and completed MLA in February 2025, achieving all 12 AWS certifications in 1 year and 4 months. I'd like to take this opportunity to share my personal thoughts and information I picked up while working toward the 12 certifications. First off, by "12 AWS certifications," I mean every certification that AWS currently offers. The criteria are revised annually https://aws.amazon.com/jp/blogs/psa/2024-japan-aws-all-certifications-engineers/ and announced in advance on the AWS Japan APN Blog , where selected individuals are also recognized. In 2024, only 1,222 individuals were officially recognized as "AWS All Certifications Engineers." According to the official article, "earning and maintaining all AWS certifications" demonstrates a solid understanding of AWS technologies and the ability to offer customers reliable and up-to-date technical guidance. While there are many companies offering cloud services—like Azure and GCP—AWS stands out as the industry standard. That's thanks to its sheer volume and quality of services, unmatched pace of updates, and the flexibility that comes from its leading market share. With the growing spotlight on AI, the importance of cloud technology is also rising. Some people might think, "Cloud or AI? That has nothing to do with me." But just like how using a computer has become second nature in most jobs, it won't be long before using AI in everyday life becomes just as common. The cloud provides easy access to both AI models and the computing power required to run them, making cloud technology essential to staying current in today's landscape. So, why is getting certified important when learning AWS and the cloud? That's exactly what I'll explain next. 2. Current Status: Where I Stand Unfortunately, having a certification doesn't necessarily make a big difference in how well you can use the cloud. To give an example, let's treat "cloud" like learning English. Say you studied hard for the TOEIC and got a high score in hopes of using English more effectively. But do you think that alone means you've really improved your English? No matter how good you are at test strategies, or how many words and grammar rules you memorize, it doesn't mean much if you can't actually use English when it counts. That said, it's definitely wrong to say TOEIC isn't helpful for improving your English skills. If it had no value, there's no way so many universities and companies would use TOEIC scores as a benchmark. TOEIC is a test that quantifies business English skills, which is why the score is recognized as a reflection of ability, not just a number. In the same way, having all 12 AWS certifications sets a clear benchmark in the cloud field. It turns abstract knowledge into something visible and concrete in the form of a qualification. Here's a breakdown of the benefits that come from this kind of visualization: Clear goal setting: since the certifications follow a roadmap provided by AWS, you can plan your learning step by step. Motivation: setting an exam date gives you a clear deadline, which helps create an environment where you can stay focused and motivated. Knowledge assurance: you'll gain and confirm the minimum level of knowledge needed to pass the exam. 
Review: even for those already familiar with the cloud, it's a good opportunity to review and check what's required for certification. Discovery: because the exams evolve with updates, they give you chances to learn about areas you might not normally encounter. Even if you switch the wording to another language, doesn't the content still come across naturally and make sense? In the end, it's not just about getting certified to boost your cloud skills, or getting certified because you want to work with the cloud. What really matters is the value in the learning process itself. The future of AWS certification Next, I'd like to dig into something I felt over the past year or so while preparing for AWS certification: "Where is AWS certification headed from here?" :::message Just to be clear, **this is purely my own personal speculation without any official backing; nothing from AWS itself. ** Please keep that in mind if you quote this. ::: When I first started studying for AWS certification back in 2022, ChatGPT was taking off, and interest in AI was growing rapidly. In response, AWS began rolling out more and more AI-focused services, and from 2024, they made some big changes to their certification structure. In April 2024, three existing Specialty certifications were discontinued: AWS Certified Data Analytics – Specialty (DAS) AWS Certified Database – Specialty (DBS) AWS Certified: SAP on AWS – Specialty (PAS) To replace DAS and DBS, a new certification was introduced in March 2024: AWS Certified Data Engineer – Associate (DEA) Later, in October 2024, AWS introduced two more certifications to reflect the roadmap for new AI services like Amazon Q and Amazon Bedrock, along with enhancements to existing services like Amazon Sagemaker: AWS Certified AI Practitioner (AIF) AWS Certified Machine Learning Engineer – Associate (MLA) This was a major shake-up, and honestly, it caused some headaches even for individual learners like me. The content I had been studying was significantly updated, so I had to completely rethink my exam schedule. It's certain that AWS certifications will continue to evolve, especially with AI leading the way as a major tech trend. While this is purely speculation, the certification that seems most likely to change is: AWS Certified Machine Learning Engineer – Specialty (MLS) The MLS was last updated in July 2022, so its content is already outdated compared to the AIF and MLA. It may simply be updated as a Specialty-level certification, but there's a strong chance it will be restructured into a new Professional qualification. Why? Because the current certification path is organized into three tiers: Practitioner, Associate, and Professional. ^1 In the same way, after AIF and MLA qualifications, a Professional-level certification is likely to follow. Whether a specialty certification will be upgraded to a professional level is ultimately up to AWS. But if that does happen, we'll likely need to anticipate a higher-level DEA certification as well. (Tentative) AWS Certified Machine Learning Engineer – Professional (MLP) (Tentative) AWS Certified Data Engineer – Professional (DEP) This is a logical prediction, but it comes with its own problems. AWS seems to uphold a symbolic 12-certification crown structure, so adding two more would break that and push the total beyond 13. One way to avoid this is to reduce the number of existing specialties—especially those that have become unclear—as new certifications are added. 
(For example) AWS Certified Security – Specialty (SCS) AWS Certified Advanced Networking – Specialty (ANS) Unlike some of the other specialty certifications that have already been retired, SCS and ANS are built around deeper, professional-level knowledge. Over 60% of the content overlaps with the Professional-level certifications. SCS focuses on organization-wide security, while ANS emphasizes networking with on-premises environments. That said, there are some current shortcomings that can't be ignored. SCS hasn't been updated to reflect developments in AI, so it doesn't cover AI-related security topics. With AI evolving so quickly, security and compliance around AI are becoming increasingly critical. So the question now is whether to add AI content into SCS, or to spread it across each professional-level certification. I think the second option is more likely, since many specialty certifications have already been merged or discontinued to align with the AI trend. In the case of ANS, it's in a similar position to SCS. Even though networking can support AI, within AWS itself, there's not a big difference in capability. It is true that Azure is required for OpenAI, GCP for Gemini, and a multi-cloud setup is necessary to use AI services provided by other cloud vendors. However, since AWS tends to be less proactive in supporting non-AWS products, there haven't been any updates to multi-cloud-related certifications so far. On the other hand, due to the growing anti-cloud sentiment, hybrid cloud is gaining attention, so the ANS certification system is likely to remain. In any case, reducing the number of certifications helps maintain the 12-certification status, so that's one possible approach. Another is consolidating roles, such as DevOps Engineer, instead of introducing new professional-level certifications. (Tentative) AWS Certified MLOps Engineer – Professional (MOP) AWS describes MLOps as "an ML culture and practice that unifies ML application development (Dev) with ML system deployment and operations (Ops)." ^2 This refers to the entire process involved in machine learning. By going through the data engineering and data analysis handled in DEA, you can make use of the entire machine learning flow used in AIF, MLA, and MLS. So if you were to choose just one area to develop as a new professional skill, I believe this would be a practical and effective path. Question Types of AWS Certifications It's not just the types of certifications that are changing. There are also updates to the exam formats. Since the SOA lab exam was discontinued, the remaining exams have been evaluated solely through multiple-choice questions. While the advantage is that results can be measured objectively and quantitatively, it's also true that this format sometimes doesn't reflect hands-on implementation skills. AWS seems aware of this, and they've introduced a new question format starting with the AIF and MLA exams. According to the AIF exam guide , the following types of questions may appear. Ordering: has a list of 3–5 responses to complete a specified task. To earn points for the question, you must select the correct answers and place them in the correct order. Matching: has a list of responses to match with a list of 3–7 prompts. To get points, you must match all the pairs correctly Case study: has one scenario with two or more related questions. The scenario is the same for each question in the case study. 
Each question in the case study will be evaluated separately and points are awarded for each correct answer. These three types didn't appear very frequently in my exam, but just as the guide describes, they were included. The difficulty level was similar to that of regular multiple-choice questions. Due to AWS exam confidentiality, I can't share exact question formats, but here's how I'd describe the types based on my experience: For sorting and matching, you can't rely on option similarity to guess the right answer. You really need to know the required steps and how the given terms or descriptions logically connect. As for case studies, while the format is essentially multiple choice, they bundle several questions into one shared scenario. This format allows you to approach the case from multiple angles, and it also helps avoid situations where you're tested more on reading comprehension than on applying your actual knowledge. In the real world, we don't just answer one question at a time. We usually simulate each case and think through it as a whole. That's why I think the case study format is a great approach for test takers. When it comes to AWS certifications, the question formats will likely continue to evolve. For example, like the hands-on labs in the SOA exam, we can expect more questions along the lines of, "Can you actually implement this?" These kinds of changes won't happen just once: they'll gradually be introduced into other certifications as well. So, if you're preparing for an AWS exam, it's important to stay up to date and be ready! 3. Mindset: Preparing for the Challenge This is something I often hear from people around me, regardless of their job title: "I don't work in anything related to AWS, but will this actually be useful if I study it?" "If I want to get AWS certified, where should I start?" "What are you using to study?" I'm certified as a cloud engineer, which means I already needed to have the knowledge to work in the cloud. Because I use it in my actual job, I interact with cloud services far more often than most people. That's why getting certified doesn't automatically mean you'll be ready to work in cloud-related roles right away. If you haven't used the cloud before, it's rare to suddenly become able to use it just because you passed an exam. A certification is kind of like a coupon. Even if you have a coupon that gives you 10% off at a gas station on purchases over 10,000 yen, there are lots of reasons you might not be able to use it: you don't have a car, the station is too far away, or you don't have enough money to hit the discount threshold. Seen this way, the conditions for using the coupon are pretty clear: You or someone close to you owns a car or is planning to. The gas station that accepts the coupon is within reach. You're in a position to make use of the discount. So before jumping in, check whether you actually have a reason to want that coupon. In other words, "Are you in a position to take action and make use of the qualification?" Even if you get a coupon, it's not like a car will magically appear, or a gas station will pop up right in front of your house, or the money to use it will suddenly fall into your lap—those things just don't happen in real life, right? The same goes for the cloud and AWS. 
For those who feel the cloud doesn't really apply to their work, examples might include: Business professionals outside of IT Developers who don't specialize in infrastructure Infrastructure specialists focused solely on on-premises systems Now, what would you recommend to someone who says they can't afford to buy a car? With car leasing or subscriptions, as long as you can pay the monthly fee, you can still drive a car. That's exactly what the cloud is in IT. I believe that "borrowing technology" is the essence of the cloud. If learning the tech is too expensive, you can just borrow it. Of course, the specifics vary by field, but I truly believe that just understanding this concept can completely change how you view technology. If someone says, "The gas station's too far," then that's a perfectly valid reason. There's no need to force yourself to go. But what if the station is close enough to swing by on your daily commute? For developers, the cloud isn't really that far away. In fact, just shifting your perspective a little might reveal a whole world of possibilities right next to you. Finally, no matter how good a coupon you have, it's no use if you don't use it. Even if you already own a car and a gas station opens up right outside your home, you still won't be able to use the coupon if you always stick to your usual station. People may have all kinds of reasons: maybe they can't pay the 10,000 yen up front, they already have a different gas card, or they're unsure about the store. But the undeniable fact is that infrastructure professionals are more naturally positioned to get into the cloud than anyone else. If you’ve only worked in on-prem environments, the cloud, whether IaaS (Infrastructure as a Service) or PaaS (Platform as a Service), might feel unfamiliar. Still, the fundamentals of the cloud are built on infrastructure knowledge. That's why, compared to people in business or development roles, it's actually much more accessible. So instead of saying, "I don't have the ability to work with the cloud, "how about saying, ‘Let's build the ability to work with the cloud’"? I started my career in development, but thanks to the cloud knowledge I gained through self-study, I was trusted with cloud-related tasks as well. After that, I earned certifications and was able to transition into a cloud-focused role. Honestly, if I had only stuck to what I was already doing or aiming to do, I probably wouldn't have made it this far. Getting all 12 AWS certifications really feels like it's opened up more doors for me. Since joining KTC, I'd say about 50% of the knowledge I gained from certifications has been directly applicable in practice. As for the other 50%, I'm continuing to work on ways to put it to good use. KTC has set "AI First" as its key goal for this year, and I plan to contribute actively to our AI initiatives. If you're interested in KTC's AI First direction, I highly recommend checking out the article written by our Vice President, Kageyama. https://blog.kinto-technologies.com/posts/2024-12-25-LookBack2024/ The official AWS認定パス are also recommended. Please take a look for reference! 4.Strategy: How I Passed the Exam There are plenty of people out there who recommend different study methods, so instead of repeating the same advice, I'd like to talk about strategies for efficiently tackling AWS certification from a different angle. The Straightforward Approach Studying seriously is actually very simple. 
The key is to start from scratch and work through the content outlined in the previously mentioned AIF exam guide This method is ideal for those who don't have basic knowledge and want to learn properly without rushing, or for anyone who prefers to take their time with exam prep. The process can be broken down into five stages: Information gathering: look through sources like search engines, social media, YouTube, blogs, etc. to find your preferred sources. Choose your source: from the available options, pick the one that fits you best. Official AWS documents: The documentation provided by AWS is always up to date, highly reliable, and of high quality. Even when I'm using other learning methods, I always go back to check the official docs. The AWS Training Center, which offers some content for free, is also a great help—definitely take advantage of it. I haven't used any paid services, but from what I've seen, they have a similar effect to that of external learning website introduced below. YouTube: offers the largest collection of free content, but the quality and accuracy can vary greatly depending on the uploader, and the information isn't always up to date. That said, if you're comfortable learning through video and audio and don't have language barriers, these downsides become less of an issue. It's also great that you can just give it a try and stop anytime you like. Books: If you like analog-style studying, books are a solid choice. Their strength lies in offering focused content with a certain level of quality assurance. The advantage is that you can get a general idea of the contents before buying, and everything you need is typically packed into one volume. However, be aware that books tend to lag behind on updates, so they're not always ideal for something like AWS certification, where things change quickly. Unless you're planning to dive into study right away and take the exam before the certification updates, it may be better to hold off. External learning websites: let's look at paid learning resources, such as external website like Udemy. These are paid resources, but they offer high-quality content and are generally updated faster than YouTube. In short, they combine the strengths of both YouTube and books, which is why I personally use them often. One thing to be careful about AWS certification is that many topics overlap across different certifications, so be sure to check the table of contents carefully before purchasing a course. Start studying: study time varies depending on the exam, but in most cases, planning for about 2 to 3 months worked best for me. Verification: use AWS official mock exams to assess your current level. Exam: be sure to schedule your exam in advance. You can take it either online or at a testing center offline, but I recommend offline for a more stable test environment. If any problems arise during an offline exam, the test center staff can help resolve them, and as a bonus, offline exam results tend to arrive faster too. Quick-Attack Method Some people think: "I already know AWS, so I don't need to start from scratch. I just want to focus on the parts I'm unfamiliar with." "I'm fine with learning as I go. Just want to get certified as fast as possible!" I've approached a few exams this way myself, so let me share the most cost-effective strategy I've used. :::message Generative AI: I used Perplexity, but any generative AI will do! The prompts I introduce here are just examples, so feel free to use whatever prompt that suits you. 
::: Keyword Summary Use AI to summarize service content and extract only the key points you need to know. :::details Answer Here is a summary centered on CloudWatch, AWS's monitoring service, along with a comparison to related services such as CloudTrail, Config, and WAF. I'll also briefly mention a few other monitoring tools not covered in detail here. About CloudWatch Overview Amazon CloudWatch is a service for monitoring AWS resources and applications in real time. It provides the following key features: Metrics collection: collects resource data such as CPU usage, disk I/O, and network traffic. Log monitoring: collects and analyzes application and system logs. Alarm settings: sends notifications or triggers automated actions based on defined thresholds. Event management: automates responses to specific events (e.g., sending notifications when an instance is launched). Major use cases Monitoring resource performance Detecting anomalies and sending alert notifications Visualizing the overall system operation status Comparison between CloudWatch and its related services Service name Main objective Features Major use cases CloudWatch Metrics & log monitoring Monitor AWS resources and applications in real time. Receive near real-time notifications. Resource monitoring, performance optimization, and anomaly detection. CloudTrail Recording operation history Record who did what and when (API calls). For compliance and security audits. Security audits and detecting unauthorized operations. AWS Config Tracking and evaluating configuration changes Record changes to resource configurations and detect rule violations. Save the setting state as a snapshot. Configuration management, ensuring compliance, and troubleshooting. AWS WAF Web application layer security Protect web applications from threats like SQL injections and DDoS attacks. Can be integrated with CloudFront. Strengthening security and blocking attacks Basis of comparison Aims and scope: CloudWatch monitors the overall system status. CloudTrail specializes in operation history. Config is used for changing settings and evaluating rules. WAF is dedicated to security measures. Real-time capabilities: CloudWatch can send notifications in near real time. CloudTrail may have some delay in reporting. Overlapping use cases: Data recorded by udTrail and Config can also be sent to CloudWatch Logs for centralized management. Comparisons among external monitoring tools applicable to AWS There are several monitoring tools available outside of AWS, including: Tool name Features Advantages Disadvantages Zabbix Open source Highly customizable Available for free Supports a wide range of environments Complex to set up and operate Mackerel Japanese SaaS-based tool Easy to set up Centralized management across multiple environments Limited AWS-specific features Site24x7 Multi-cloud support Supports hybrid cloud environments More difficult to configure in detail compared to AWS-specific tools Selection criteria If you're using AWS only: CloudWatch is the best fit. For multi-cloud setups: Site24x7 or Mackerel work well. If you need deep customization: Zabbix is the way to go. Keys to successful preparation for AWS certification exams Clearly understand the role and use cases of each monitoring service. Be familiar with integration examples involving CloudWatch and other services like CloudTrail, Config, and WAF. Topics such as pay-as-you-go pricing and real-time monitoring may also appear on the exam. If you have any questions, feel free to ask! 
::: Practice Question Generation Once you grasp the concepts of key terms, try creating simple questions to test your understanding. :::details Answer Question 1: You want to receive a notification when an EC2 instance's CPU utilization exceeds a specific thresholds. Which AWS service should you use to achieve this purpose? Options: • A. AWS CloudTrail • B. Amazon CloudWatch • C. AWS Config • D. AWS WAF Question 2: Your security team wants to track API calls made to AWS resources over the past three months to investigate a specific user's activity. Which service is best suited for this purpose? Options: • A. Amazon CloudWatch • B. AWS CloudTrail • C. AWS Config • D. Amazon Athena Question 3: A company wants to track configuration changes to its AWS resources and detect compliance violations. Which service is best suited for this purpose? Options: • A. AWS CloudTrail • B. Amazon CloudWatch Logs • C. AWS Config • D. AWS WAF Question 4: You want to protect a web application from SQL injection and cross-site scripting (XSS) attacks. Which AWS service is best suited for this purpose? Options: • A. Amazon CloudWatch Logs • B. AWS WAF • C. AWS Shield • D. AWS Config ::: I only explained the two main ways I use as quick-attack methods, but it's a generative AI learning approach that anyone can customize endlessly. Besides the methods I introduced, I also often use it to throw in dozens of detailed questions as reminders. If you give the AI some reference material, it'll generate richer keyword summaries and mock questions. I used the quick-attack method for about 70% of my AIF and MLA prep, and passed in 1 or 2 rounds, so I can guarantee you that it works! 5. 1st-art: Every Start is Art After earning all 12 AWS certifications, I tweeted some thoughts about the journey. You may have noticed that I hid a little trick in both the title and the body. Go back to the beginning, and you'll see it right away. Starting Point: Overview -> 5. 1st-art: Every Start is Art The reason I added this quirky twist is because my 12-cert, 1-year-and-4-month journey became a single picture after all the pieces came together, one I could only complete because I started (art) this canvas. You never know what the final image will be when you first pick up the brush. When I was in elementary school, I was asked to draw my future, and I drew myself as a firefighter. In junior high school, it was a novelist. Now, I work as a cloud engineer, which is completely different from either of those. But does that mean the pictures I drew as a child had no meaning? I believe drawing them had meaning because I was facing my dreams. Now, I've completed a picture called "12 AWS Certifications." I intend to keep drawing new pictures as I move forward. This article I wrote on the Tech Blog is one picture, and I think my work at KTC can become another in the series. Thank you very much for reading!
Introduction
Hello! I am Yamada, and I develop and operate in-house tools in the Platform Engineering Team of KINTO Technologies' (KTC) Platform Group. If you want to know more about the CMDB developed by the Platform Engineering team, please check out the article below!
https://blog.kinto-technologies.com/posts/2023-12-14-CMDB/

This time, I would like to talk about how we implemented a CMDB data search function and a CSV output function in the chatbot, one of the CMDB's features, using generative AI and Text-to-SQL. The CMDB chatbot allows you to ask questions about how to use the CMDB or about the data managed in the CMDB. Questions about CMDB data had originally been answered with a RAG mechanism built on ChromaDB, but we moved to a Text-to-SQL implementation for the following reasons:

Advantages of Text-to-SQL over RAG
- Data accuracy and real-time availability: the latest data can be retrieved in real time directly from the CMDB database, and no additional processing is required to keep the data up to date.
- System simplification: no infrastructure for a vector DB or embedding processing is required (ChromaDB and the additional batch jobs for embedding data are no longer needed).

For these reasons, we decided that Text-to-SQL is more suitable for a system that handles structured data such as the CMDB.

What Is Text-to-SQL?
Text-to-SQL is a technology for converting natural language queries into SQL queries. It allows even users without SQL knowledge to easily extract the information they need from the database. This makes it possible to retrieve data managed in the CMDB database—such as products, domains, teams, users, and vulnerability information including ECR and VMDR—from natural language queries. The following are some examples of how this could be used within KTC:
- Retrieving a list of domains that are not properly managed (domains not linked to products in the CMDB)
- Retrieving the Atlassian IDs of all employees (the MSP (Managed Service Provider) team creates tickets for requests such as addressing PC vulnerabilities by mentioning (tagging) the relevant individuals)
- Aggregating the number of vulnerabilities detected in resources related to the products each group is responsible for
- Extracting products for which the AWS resource start/stop schedule has not been set

Previously, when a request to extract such data came to the Platform Engineering team, a person in charge would run a SQL query directly against the CMDB database, extract and process the data, and hand it over to the requester. Once requesters can extract data themselves using Text-to-SQL in the CMDB chatbot, they no longer need to go through a person in charge, as shown in the figure below:

Text-to-SQL is a convenient feature, but you must be aware of the risk of insecure SQL generation. While the following figure illustrates an extreme case, since SQL is generated from natural language, there is a risk of unintentionally generating SQL statements that update or delete data or modify table structures. So you need to prevent unsafe SQL from being executed through the following measures:
- Connecting to a read-only DB endpoint
- Giving the DB users read-only permissions
- Carrying out a validation check in the application so that commands other than SELECT are never executed (a minimal sketch of such a check is shown after this list)
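The article does not show the validation code itself, so the following is only a rough sketch of what a SELECT-only check might look like, assuming plain string inspection in Python; the actual check in the CMDB chatbot may be implemented differently.

```python
# Rough sketch (not the actual CMDB implementation): reject anything that is not
# a single SELECT statement before it reaches the database.
import re

FORBIDDEN_KEYWORDS = {
    "insert", "update", "delete", "drop", "alter", "create",
    "truncate", "grant", "revoke", "replace",
}

def is_safe_select(sql: str) -> bool:
    """Return True only if the generated SQL looks like a single SELECT statement."""
    statement = sql.strip().rstrip(";")
    # Reject multiple statements chained with semicolons.
    if ";" in statement:
        return False
    # The statement must start with SELECT (or WITH, for CTE-style SELECTs).
    if not re.match(r"^\s*(select|with)\b", statement, re.IGNORECASE):
        return False
    # Reject statements containing data-modifying keywords.
    tokens = set(re.findall(r"[a-zA-Z_]+", statement.lower()))
    return tokens.isdisjoint(FORBIDDEN_KEYWORDS)
```

In practice, a check like this is only the application-side layer; it is combined with the read-only endpoint and read-only DB user described above rather than being the sole safeguard.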
System Configuration
Here is the architecture of the CMDB. Resources that are not relevant to this article have been excluded.

As I explained at the beginning, we had originally used ChromaDB as a vector DB: information on how to use the CMDB was obtained from Confluence (implemented with LlamaIndex), CMDB data was retrieved from the database (implemented with Spring AI), and both were loaded into ChromaDB. This time, we migrated the answering of questions about CMDB data from the RAG feature built with Spring AI + ChromaDB to a feature based on Text-to-SQL.

Text-to-SQL Implementation
From here on, I would like to explain the implementation while showing you the actual code.

CMDB Data Search Function

Retrieving schema information
First, retrieve the schema information the LLM needs in order to generate SQL. The less schema information there is, the higher the accuracy, so we adopted a method of specifying only the necessary tables. Since table column comments are important judgment criteria when the LLM generates SQL statements, all of them need to be added beforehand.

```python
def fetch_db_schema():
    cmdb_tables = ['table1', 'table2', ...]
    cmdb_tables_str = ', '.join([f"'{table}'" for table in cmdb_tables])
    query = f"""
        SELECT
            t.TABLE_SCHEMA, t.TABLE_NAME, t.TABLE_COMMENT,
            c.COLUMN_NAME, c.DATA_TYPE, c.COLUMN_KEY, c.COLUMN_COMMENT
        FROM information_schema.COLUMNS c
        INNER JOIN information_schema.TABLES t
            ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
            AND c.TABLE_NAME = t.TABLE_NAME
        WHERE t.TABLE_SCHEMA = 'cmdb'
            AND t.TABLE_NAME IN ({cmdb_tables_str})
        ORDER BY t.TABLE_SCHEMA, t.TABLE_NAME, c.COLUMN_NAME
    """
    connection = get_db_connection()
    try:
        cursor = connection.cursor()
        cursor.execute(query)
        return cursor.fetchall()
    finally:
        cursor.close()
        connection.close()
```

Example of retrieved results

| TABLE_SCHEMA | TABLE_NAME | TABLE_COMMENT | COLUMN_NAME | DATA_TYPE | COLUMN_KEY | COLUMN_COMMENT |
| --- | --- | --- | --- | --- | --- | --- |
| cmdb | product | Product table | product_id | bigint | PRI | Product ID |
| cmdb | product | Product table | product_name | varchar | | Product name |
| cmdb | product | Product table | group_id | varchar | | Product's responsible department (group) ID |
| cmdb | product | Product table | delete_flag | bit | | Logical deletion flag 1=deleted, 0=not deleted |

Formatting the retrieved schema information into text for the prompt passed to the LLM

```python
def format_schema(schema_data):
    schema_str = ''
    for row in schema_data:
        schema_str += (
            f"Schema: {row[0]}, Table Name: {row[1]}, Table Comment: {row[2]}, "
            f"Column Name: {row[3]}, Data Type: {row[4]}, "
            f"Primary Key: {'yes' if row[5] == 'PRI' else 'no'}, "
            f"Column Comment: {row[6]}\n"
        )
    return schema_str
```

Each column is converted into a line of text like the following, and the schema information is passed to the LLM.

```
Schema: cmdb, Table Name: product, Table Comment: プロダクトテーブル, Column Name: product_id, Data Type: bigint, Primary Key: yes, Column Comment: プロダクトID
Schema: cmdb, Table Name: product, Table Comment: プロダクトテーブル, Column Name: product_name, Data Type: varchar, Primary Key: no, Column Comment: プロダクト名
Schema: cmdb, Table Name: product, Table Comment: プロダクトテーブル, Column Name: group_id, Data Type: varchar, Primary Key: no, Column Comment: プロダクトの担当部署(グループ)ID
Schema: cmdb, Table Name: product, Table Comment: プロダクトテーブル, Column Name: delete_flag, Data Type: bit, Primary Key: no, Column Comment: 論理削除フラグ 1=削除, 0=未削除
```

Generating SQL queries with the LLM from the chatbot question and the schema information
This is the Text-to-SQL portion, where SQL queries are generated from natural language. Based on the question from the CMDB chatbot and the schema information, we specify various conditions in the prompt and have the LLM generate SQL.
For example, the following conditions can be specified:
- Generate valid queries for MySQL 8.0
- Use fuzzy search for condition expressions other than IDs
- Exclude logically deleted data from searches by default
- Do not generate anything other than SQL statements
- Add context information:
  - Convert questions of the form "... of KTC" or "... of CMDB" into "All ..."
  - Convert questions about regions into questions about AWS regions
  - Convert "Tokyo region" to ap-northeast-1

The instruction "Do not generate anything other than SQL statements" is particularly important. When this was not conveyed properly, responses often ended up including unnecessary text such as: "Based on the provided information, the following SQL has been generated: SELECT~". So the prompt needs to ensure that only SQL statements of the form "SELECT~" are generated, without extra text, explanations, or markdown formatting.

```python
def generate_sql(schema_str, query):
    prompt = f"""
    Generate a SQL query based on the given MySQL database schema, system contexts, and question.

    Follow these rules strictly:
    1. Use MySQL 8.0 syntax.
    2. Use `schema_name.table_name` format for all table references.
    3. For WHERE clauses:
       - Primarily use name fields for conditions, not ID fields
       - Use LIKE '%value%' for non-ID fields (fuzzy search)
       - Use exact matching for ID fields
       - Include "delete_flag = 0" for normal searches
       - Use "delete_flag = 1" only when the question specifically asks for "deleted" items

    CRITICAL INSTRUCTIONS:
    - Output MUST contain ONLY valid SQL query.
    - DO NOT include any explanations, comments, or additional text.
    - DO NOT use markdown formatting.
    - DO NOT generate invalid SQL query.

    Process:
    1. Carefully review and understand the schema.
    2. Generate the SQL query using ONLY existing tables and columns.
    3. Double-check query against schema for validity.

    System Contexts:
    - Company: KINTO Technologies Corporation (KTC)
    - System: Configuration Management Database (CMDB)
    - Regions: AWS Regions (e.g., Tokyo region = ap-northeast-1)

    Interpretation Rules:
    - "KTC" or "CMDB" in query: Refer to all information in the database
      Examples:
        "Employees in KTC" -> "All users"
        "KTC's products" -> "All products"
        "Domains on CMDB" -> "All domains"
    - Region mentions: Interpret as AWS Regions
      Example:
        "ECR repositories in Tokyo region" -> "ECR repositories in ap-northeast-1"

    Database Schema:
    {schema_str}

    Question: {query}
    """
    return llm.complete(prompt).text.strip()
```

Performing a validation check so that only SELECT statements generated by the Text-to-SQL step are executed
To reduce the risk of unsafe SQL, we connect to a read-only DB endpoint, and we additionally check whether anything other than a SELECT query has been generated (see the sketch earlier in this article).

Executing the SQL query generated by the LLM

Generating an answer with the LLM from the generated SQL query, the SQL execution result, and the question
The executed SQL query, the result of the SQL execution, and the question are passed to the LLM to generate an answer. Unlike the Text-to-SQL prompt, which includes many instructions, this prompt includes fewer instructions, but it still specifies not to include the DB schema structure or physical names in the answer.
```python
def generate_answer(executed_sql, sql_result, query):
    prompt = f"""
    Generate an answer based on the provided executed SQL, its result, and the question.
    Ensure the answer does not include information about the database schema or the column names.
    Respond in the same language as the question.

    Executed SQL: {executed_sql}
    SQL Result: {sql_result}
    Question: {query}
    """
    return llm.stream_complete(prompt)
```

Execution result
Question: Tell me the products of the Platform Group. Based on this question and the database schema, the LLM generates SQL as follows:

```sql
SELECT product_name FROM product WHERE group_name LIKE '%プラットフォーム%' AND delete_flag = 0;
```

This SQL and the result of executing it are then passed to the LLM to generate an answer. This is the vulnerability information retrieved from the ECR scan results.

CSV Output Function

Generating a JSON object containing an SQL query with the LLM, based on the output request from the CMDB chatbot and the schema information
Based on the natural language describing the CMDB data to be output as CSV, we use the LLM to generate a JSON object containing the column names to be output and the SQL statement that retrieves them. The conditions are basically the same as in the prompt for the CMDB data search function, but this prompt emphasizes the instructions for generating a JSON object that follows the template. Here is the prompt:

```python
prompt = f"""
Generate a SQL query and column names based on the given MySQL database schema, system contexts and question.

Follow these rules strictly:
1. Use MySQL 8.0 syntax.
2. Use `schema_name.table_name` format for all table references.
3. For WHERE clauses:
   - Primarily use name fields for conditions, not ID fields
   - Use LIKE '%value%' for non-ID fields (fuzzy search)
   - Use exact matching for ID fields
   - Include "delete_flag = 0" for normal searches
   - Use "delete_flag = 1" only when the question specifically asks for "deleted" items

Process:
1. Carefully review and understand the schema.
2. Generate the SQL query using ONLY existing tables and columns.
3. Extract the column names from the query.
4. Double-check query against schema for validity.

System Contexts:
- Company: KINTO Technologies Corporation (KTC)
- System: Configuration Management Database (CMDB)
- Regions: AWS Regions (e.g., Tokyo region = ap-northeast-1)

Interpretation Rules:
- "KTC" or "CMDB" in query: Refer to all information in the database
  Examples:
    "Employees in KTC" -> "All users"
    "KTC's products" -> "All products"
    "Domains on CMDB" -> "All domains"
- Region mentions: Interpret as AWS Regions
  Example:
    "ECR repositories in Tokyo region" -> "ECR repositories in ap-northeast-1"

Output Format:
Respond ONLY with a JSON object containing the SQL query and column names:
{{
  "sql_query": "SELECT t.column1, t.column2, t.column3 FROM schema_name.table_name t WHERE condition;",
  "column_names": ["column1", "column2", "column3"]
}}

CRITICAL INSTRUCTIONS:
- Output MUST contain ONLY the JSON object specified above.
- DO NOT include any explanations, comments, or additional text.
- DO NOT use markdown formatting.

Ensure:
- "sql_query" contains only valid SQL syntax.
- "column_names" array exactly matches the columns in the SQL query.

Database Schema:
{schema_str}

Question: {query}
"""
```

Performing a validation check so that only SELECT statements are executed
Executing the SQL query generated by the LLM
These steps are the same as in the CMDB data search function.

Outputting a CSV file from the execution results
Use the SQL result and the column names generated by the LLM to output a CSV file.
```python
# Get the column names from the JSON object generated by the LLM
column_names = response_json["column_names"]
# Execute the SQL generated by the LLM
sql_result = execute_sql(response_json["sql_query"])

csv_file_name = "output.csv"
with open(csv_file_name, mode="w", newline="", encoding="utf-8-sig") as file:
    writer = csv.writer(file)
    writer.writerow(column_names)
    writer.writerows(sql_result)

return FileResponse(
    csv_file_name,
    media_type="text/csv",
    headers={"Content-Disposition": 'attachment; filename="output.csv"'}
)
```

Execution result
By posting the content and columns you want to output in the chat, you can now get a CSV file as shown below. First, the LLM creates a JSON object like the one below from the chat message and the database schema.

```json
{
  "sql_query": "SELECT service_name, group_name, repo_name, region, critical, high, total FROM ecr_scan_report WHERE delete_flag = 0;",
  "column_names": ["プロダクト名", "部署名", "リポジトリ名", "リージョン名", "critical", "high", "total"]
}
```

SQL is then executed based on this information, and a CSV file like the following is output:

| Product name | Division name | Repository name | Region name | critical | high | total |
| --- | --- | --- | --- | --- | --- | --- |
| CMDB | Platform | ××××× | ap-northeast-1 | 1 | 2 | 3 |
| CMDB | Platform | ××××× | ap-northeast-1 | 1 | 1 | 2 |
| CMDB | Platform | ××××× | ap-northeast-1 | 1 | 1 | 2 |

Next Steps
So far, we have used generative AI and Text-to-SQL to implement a CMDB data search function and a CSV data output function. However, there is still room for improvement, as outlined below:
- The CMDB data search function calls the LLM twice, which makes it slow.
- It is weak at answering complex and ambiguous questions: natural language is inherently ambiguous, allowing multiple interpretations of a question.
- Accurate understanding of the schema: the schema information is complex, and it is difficult to make the system understand the relationships between columns across tables.
- Addition of context information: currently, the first prompt adds only minimal context. Anticipating a future in which much more context information is added, we are considering methods to transform the question, using that larger body of context, into an appropriate question before the first LLM call. We are also exploring fine-tuning with a dataset that includes KTC-specific context information.
- Implementing query routing: since the APIs called from the front end are split in two—one for CMDB data search and one for CSV output—we want to unify them into a single API that determines which operation to call based on the content of the question.

Conclusion
In this article, I discussed the CMDB data search function and CSV output function built with generative AI and Text-to-SQL. It's difficult to keep up with new generative AI-related technologies as they continue to emerge every day. But as AI becomes more involved in application development than ever before, I would like to actively make use of any technologies that interest me or that seem applicable to our company's products.
Self-Introduction
Hi, I'm Tetsu. I joined KTC in March 2025. Before that, I worked as an infrastructure engineer handling both on-premises and cloud environments, and at KTC I've joined the team as a platform engineer. I'm a big fan of travel and nature, so I usually head out somewhere far away during long holidays.

Overview
In this article, I'll walk you through how to update your GitHub Actions workflow to pull public container images—such as JDK, Go, or nginx—from the ECR Public Gallery instead of Docker Hub.

Starting April 1, 2025, Docker Hub will tighten the rules on pulling public container images as an unauthenticated user. More specifically, unauthenticated users will be limited to 10 image pulls per hour per source IP address. Learn more here.

The virtual machines that run GitHub Actions workflows are shared across all users, which means Docker Hub sees only a limited set of source IP addresses. Because of this, the limit above became a bottleneck when building containers with GitHub Actions, and we needed to find a workaround.

Prerequisites
At our company, we used GitHub Actions with the following configuration to automate container builds (this is a roughly abstracted version of the actual configuration).

Considering Countermeasures
We explored a few ways to deal with the Docker Hub pull limit.

Using a Personal Access Token (PAT) to log in to Docker Hub and pull
You might be thinking, "Why not just authenticate with Docker Hub in the first place?" Fair point. You can generate a Docker Hub PAT and use it in your GitHub Actions workflow with docker login to authenticate. That way, you can get around the pull limit. Just keep in mind that PATs are tied to individual users. Since our team shares GitHub Actions workflows, linking tokens to individual users isn't ideal from a license management standpoint.

Logging in to Docker Hub with an Organization Access Token (OAT) and pulling
This is basically the same method as above, but the key difference is that you authenticate with a shared token tied to the organization rather than to an individual user. To use this shared token, you'll need a Docker Desktop license for either the Team or Business plan.

Migrating to GitHub Container Registry (GHCR)
Here, I'll cover pulling container images from GitHub Container Registry (GHCR), which is provided by GitHub. By using {{ secrets.GITHUB_TOKEN }} in your GitHub Actions workflow, you can authenticate and pull container images. That said, searching for images can be a bit tricky, especially if you're trying to compare versions with what's available on Docker Hub.

Migrating to the ECR Public Gallery
Here's how you can pull container images from the ECR Public Gallery provided by AWS. The limits differ depending on whether you authenticate with IAM, but it's basically free to use.

For unauthenticated users, the following limits apply per source IP address when using the ECR Public Gallery:
- 1 pull per second
- 500 GB of pulls per month

Authenticated users, on the other hand, are subject to the following limits on a per-account basis:
- 10 pulls per second
- Transfers over 5 TB/month are charged at $0.09 per GB (the first 5 TB is free)

You can find more details in the official documentation below.
https://docs.aws.amazon.com/ja_jp/AmazonECR/latest/public/public-service-quotas.html
https://aws.amazon.com/jp/ecr/pricing/

If you are not using an AWS account, data transferred from a public repository is limited based on the source IP. The ECR Public Gallery includes official Docker images, which are equivalent to those on Docker Hub.
That makes it easier to use in practice and simplifies the migration process.

Case Comparison
I reviewed the proposals above and evaluated them based on QCD. Here's the comparison table:

| Proposal | Quality | Cost | Delivery |
| --- | --- | --- | --- |
| Log in to Docker Hub using a PAT | × Relies on personal tokens, which isn't ideal for organizations. No change in convenience from the current setup. | 〇 No additional cost | 〇 Easy to implement with little extra workload |
| Log in to Docker Hub using an OAT | ○ No change from the current setup | × License costs increase depending on the number of users | × License changes take time to process |
| Migrate to GHCR | △ Hard to find images equivalent to the ones currently used on Docker Hub | 〇 No additional cost | 〇 Easy to implement with little extra workload |
| Migrate to ECR Public Gallery | 〇 Easy to find images matching the ones currently used on Docker Hub | 〇 No additional cost | 〇 Easy to implement with little extra workload |

One advantage of using a PAT or OAT is that it keeps things as convenient as they are now. GHCR can be set up easily using GitHub's {{ secrets.GITHUB_TOKEN }}, but it's harder to search for container images compared to the ECR Public Gallery. The ECR Public Gallery requires some IAM policy changes, but since they're minor, the extra workload is minimal. Based on these points, we decided to go with "migrating to the ECR Public Gallery," as it's low-workload, cost-free, and offers good usability. Note: depending on your environment or organization, this option may not always be the best fit.

Settings for Migrating to the ECR Public Gallery
To migrate, you'll need to update the container image source, set up the YAML file for the GitHub Actions workflow, and configure AWS accordingly.

Diagram

Fixing the Container Image Source

Searching for container images
In most cases, you probably define where to pull container images from in files such as your Dockerfile or docker-compose.yml. Here, we'll walk through migrating the source of a JDK container image from Docker Hub to the ECR Public Gallery using a Dockerfile. Let's say your Dockerfile includes a FROM line like this:

```dockerfile
FROM eclipse-temurin:17.0.12_7-jdk-alpine
```

Search here to check whether the image is available on the ECR Public Gallery. In this case, search for the name of the official Docker Hub image before the ":" (eclipse-temurin) and pick the one labeled "by Docker." Select "Image tags" to display the list of images, type the tag of the official Docker Hub image (in this case, 17.0.12_7-jdk-alpine) into the image tags search field to find the image you're looking for, and then copy the "Image URI."

Updating the FROM line
Paste the modified container image URI into the FROM line. In this case, the updated URI looks like the example below (note the addition of public.ecr.aws/docker/library/ compared to the original).

```dockerfile
FROM public.ecr.aws/docker/library/eclipse-temurin:17.0.12_7-jdk-alpine
```

With this change, your setup will now pull images from the ECR Public Gallery.

AWS Configuration
To pull from the ECR Public Gallery while authenticated, you'll need to set up an IAM role and policy.

IAM Role
You can follow the steps in GitHub's official documentation for this:
https://docs.github.com/ja/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services
Start by setting up the identity provider, then create the IAM role.

IAM Policy
Create an IAM policy that allows the actions needed to pull from the ECR Public Gallery.
I referred to the following documentation for this:
https://docs.aws.amazon.com/ja_jp/AmazonECR/latest/public/docker-pull-ecr-image.html

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GetAuthorizationToken",
      "Effect": "Allow",
      "Action": [
        "ecr-public:GetAuthorizationToken",
        "sts:GetServiceBearerToken"
      ],
      "Resource": "*"
    }
  ]
}
```

Attach this IAM policy to the IAM role you created above.

Adding a Login Step for the ECR Public Gallery to GitHub Actions
To log in to the ECR Public Gallery with authentication, add a login step to the YAML file that defines the GitHub Actions workflow. In our setup, we add the following before the Docker build step.

```yaml
## Log in to the ECR Public Gallery
- name: Login to ECR Public Gallery
  id: login-ecr-public
  run: |
    aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
```

*Since the ECR Public Gallery is hosted in the us-east-1 region, make sure to explicitly set --region us-east-1.

Conclusion
In this article, we walked through how to set up your GitHub Actions workflow to pull public container images (such as JDK, Go, and nginx) from the ECR Public Gallery instead of Docker Hub. Hope this helps with your development and daily tasks!
Introduction
Hello, this is Hirata from the Analysis Production Group! As an analyst, I'd like to talk about how I streamline the SQL creation tasks I handle every day. In this article, I will cover how I used GitHub Copilot Agent and Python to streamline the task of writing complex SQL consisting of hundreds of lines, the trial-and-error process and its results, and future improvements.

【Summary】
✔︎ Prepare table information in advance and have the generative AI create SQL
✔︎ Implement a system that automatically executes and checks the created SQL using Python
✔︎ Have the AI automatically fix errors when they occur, to improve work efficiency

Background: Daily SQL Creation Tasks and Their Challenges
I face the following problems daily:
- Complicated interactions with the generative AI: I had to repeatedly explain table information, data types, date formats, and so on to the generative AI every time, which was time-consuming.
- Creation of massive SQL: I have to write hundreds of lines of SQL for tasks such as extracting users for marketing purposes or creating data for analysis, with complex processing logic scattered throughout.
- Repeated trial and error (loops): the repetitive cycle of copying and executing the generated SQL and, when an error occurs, forwarding the error log to request a correction has become a bottleneck. If I fix the SQL myself, it diverges from the latest version created by GitHub Copilot, and when I request the next fix, it sometimes reverts to a previous state.

Trial and Error! Building an Automated Workflow Using Generative AI and Python
I sought to improve work efficiency by adopting the following process.

Overview of the automation flow
- Registration of preliminary information: I compile the structure of each table, data types, sample data, sample SQL, and processing tips into separate prompt files.
- SQL generation using generative AI: I give the generative AI a prompt describing the full flow—"generate SQL based on the table information, save it, and verify its execution"—and it automatically produces the SQL file.
- Execution and checking with Python: the generated SQL is executed with a Python script. If an error occurs, the error log is fed back and an automatic correction is requested.

Key points of the approach
Below are the directory structure and example files I actually built (a sketch of what the checker script might look like follows this list):

rules/conversation_rules.prompt.md
Basic conversation rules: generate and save SQL based on the table information, then execute the Python file to check whether the SQL is correct. This file also describes the rules for SQL creation and the preferred conversation style.

tables/.prompt.md
Table information. By including sample data, the generative AI can judge the characteristics of the data.

## テーブル名
users
## 説明
ユーザー情報のテーブル。
user_idをキーにorderテーブルと紐付け可能
## sample SQL
```sql
select
  user_id as "顧客ID",
  name as "顧客名",
  address as "住所"
from users
```
## sample data
| 顧客ID | 顧客名 | 住所 |
| ------ | -------- | -------------------------------- |
| 1 | 山田太郎 | 東京都新宿区西新宿 |
| 2 | 佐藤花子 | 大阪府大阪市中央区難波 |
| 3 | 鈴木次郎 | 愛知県名古屋市中区栄 |

sql/
The generated SQL is saved here as files.

utils/query_checker.py
After creating the SQL, this Python file is executed to check whether the SQL is correct.
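The contents of query_checker.py are not shown in this article, so the following is only a rough sketch of what such a checker might look like, assuming a MySQL database accessed via mysql-connector-python and an EXPLAIN-based dry run; the file name comes from the directory structure above, but the connection settings and error handling are my own assumptions.

```python
# query_checker.py — a minimal sketch (an assumption, not the author's actual script).
# It reads a generated SQL file and validates it with EXPLAIN so that syntax errors
# and references to nonexistent tables/columns surface without running the full query.
import sys

import mysql.connector  # assumed DB driver; the actual environment may differ


def check_query(sql_path: str) -> None:
    with open(sql_path, encoding="utf-8") as f:
        sql = f.read().strip().rstrip(";")

    connection = mysql.connector.connect(
        host="localhost", user="analyst", password="***", database="analytics"  # placeholders
    )
    try:
        cursor = connection.cursor()
        cursor.execute(f"EXPLAIN {sql}")  # dry-run style validation of the generated SQL
        cursor.fetchall()
        print(f"SQL validation complete! {sql_path} is executable.")
    except mysql.connector.Error as err:
        # The error message printed here is what gets fed back to the agent for auto-correction.
        print(f"SQL validation failed for {sql_path}: {err}")
        sys.exit(1)
    finally:
        connection.close()


if __name__ == "__main__":
    check_query(sys.argv[1])
```

A command-line interface like this matches how the agent invokes the checker in the transcript below (python3 query_checker.py メール配信集計_今月.sql).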
Example of an Actual GitHub Copilot Agent Response
Here's a simple example to check how it actually behaves.

[in] Write an SQL query to count how many emails were sent this month, grouped by user.

[out] First, let's check the schema information, then create an SQL query to analyze email delivery. We'll look for table information related to email delivery.
"› Searched for files matching /prompts/tables/**, 14 matches"
We've located schema information related to email delivery history! Let's review the details and write an SQL query to extract the number of deliveries made this month.
Read メール配信履歴.prompt.md
Schema information has been confirmed. It appears the data is stored in a table named [メール配信履歴テーブル] (Email Delivery History Table). We'll now create an SQL query to aggregate the number of email deliveries for this month (April 2025). Key columns in this table include:
• email_id: メルマガID (newsletter ID)
• delivery_timestamp: 配信日時 (delivery date and time)
• delivery_status: 配信ステータス (delivery status)
Now, let's create the SQL query to aggregate the number of email deliveries for this month and save it to a file named メール配信集計_今月.sql. This SQL query will:
・ Filter emails delivered between April 1 and April 30, 2025
・ Include only rows where the delivery status is "Delivered"
・ Calculate the number of deliveries, successes, failures, and total deliveries per user
・ Display overall totals at the end
Would you like to validate the query using Python? Let's run the SQL we just created for verification:
python3 query_checker.py メール配信集計_今月.sql
SQL validation complete! The query was successfully validated and is executable. The test run showed no syntax errors and confirmed a correct execution plan.

Results and Future Prospects
Fortunately, the entire process ran smoothly, marking the first step toward automating SQL creation! That said, the process is not perfect. What challenges and improvements should I work on next?

● Current Challenges
- Review workload: I tend to miss errors more easily when reviewing AI-generated SQL than when writing SQL by hand. I am still in the trial-and-error stage of developing a verification mechanism and review methods that would let even non-SQL users review queries effectively, and I hope to improve this going forward. I also hope that advances in generative AI will help address these challenges!
- Checking whether data has been extracted as intended: there are cases where the requirement definitions are incomplete or I fail to verbalize what is in my head accurately, making it difficult to automatically determine whether the processing matches my intent. There is still room for improvement in conveying subtle nuances and intentions.

● The Next Challenges
- Automating record-count checks: as a first step toward more sophisticated reviews, I would like to implement a function that checks whether the number of extracted rows matches my expectation.
- Accumulating data processing know-how that could be called the "secret sauce": I want to keep adding to the prompts the effective data processing techniques that become apparent the more I use the workflow.
- Expanding to analysis automation: ultimately, I aim to build a system that can automate, to some extent, the entire workflow from SQL creation to the analysis of the extracted data!
はじめに こんにちは! KINTOテクノロジーズのプロジェクト推進グループでWebエンジニアをしている亀山です。 フロントエンドを勉強中です。 モダンなWeb開発においては コンポーネント指向 が主流となっています。UIを再利用可能な部品に分割することで、開発効率や保守性が向上します。 Web Components と Tailwind CSS は、どちらもコンポーネント指向のフロントエンド開発を支援する強力なツールです。 Web Componentsは、標準仕様に基づいてカプセル化された再利用可能なカスタム要素を作成できる近年注目を集めている技術です。一方、Tailwind CSSは、ユーティリティファーストのアプローチで高速なUIスタイリングを実現するCSSフレームワークです。最近だとTailwind CSSもパフォーマンスが向上したv4が登場しておりアップデートも活発です。 一見すると、これらの技術は相性が良いように思えるかもしれません。「コンポーネントごとにカプセル化されたマークアップとロジック(Web Components)に、ユーティリティクラスで手軽にスタイルを当てる(Tailwind CSS)」という組み合わせは魅力的、、、だと思っていました。 いざ開発を始めると、どうしてもうまくいかない。調べていくとWeb Componentsの根幹をなす Shadow DOM と、Tailwind CSSのスタイリングメカニズムには、お互いの根本的な思想が衝突していることがわかりました。本記事では、特にShadow DOMの観点から、両者がなぜ相性が悪いのか、そしてなぜ併用するべきではないのか私が勉強したことをまとめていきます。 Web Componentsについて Shadow DOMとは何ぞや まずWeb Componentsは主に以下の3つの技術から構成されてます。 Custom Elements: 独自のHTML要素(例: <my-button> )を定義するAPI Shadow DOM: コンポーネント内部のDOMツリーとスタイルを、外部から隔離(カプセル化)する技術 HTML Templates: 再利用可能なマークアップの断片を保持するための <template> 要素と <slot> 要素 この中でも、Tailwind CSSとの相性の悪さに直結するのが Shadow DOM の存在です。 Shadow DOMを要素にアタッチすると、その要素は Shadow Host となり、内部に隠されたDOMツリー( Shadow Tree )を持ちます。Shadow Tree内の要素に対するスタイルは、原則としてShadow Treeの外部(メインドキュメントや親のShadow Tree)からは影響を受けません。逆に、Shadow Treeの外部で定義されたスタイルも、原則としてShadow Tree内部には適用されません。 これは、コンポーネントのスタイルが外部のCSSルールに汚染されたり、コンポーネント内部のスタイルが外部に漏れ出たりするのを防ぐ、強力な スタイルのカプセル化機能 を提供します。これにより、異なるCSS設計手法が混在する環境でも、コンポーネントの見た目が予期せず崩れるといった心配がなくなります。 下記に併用した場合のソースコードの例を記載します。 class MyStyledComponent extends HTMLElement { constructor() { super(); // Shadow DOMをアタッチ(openモードで外部からアクセス可能に) const shadowRoot = this.attachShadow({ mode: 'open' }); // Shadow DOM内部のHTML構造 const template = document.createElement('template'); template.innerHTML = ` <div class="container mx-auto p-4 bg-blue-200"> // Tailwindクラスを使用 <p class="text-xl font-bold text-gray-800">Hello from Shadow DOM!</p> // Tailwindクラスを使用 <button class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded"> Click me </button> </div> `; shadowRoot.appendChild(template.content.cloneNode(true)); // ★ ここが問題 ★ Shadow DOM内部にスタイルを適用するには...? // 外部のスタイルシートは原則届かない } } customElements.define('my-styled-component', MyStyledComponent); 上記の例のように、 MyStyledComponent の Shadow DOM 内で <div class="container mx-auto p-4 bg-blue-200"> のようなTailwindクラスを使用しても、デフォルトではこれらのスタイルは適用されません。 Tailwind CSSについて ユーティリティファーストとグローバルCSS Tailwind CSSは、 flex , pt-4 , text-blue-500 といった低レベルなユーティリティクラスをHTMLに直接記述することで、高速にUIを構築するアプローチを採用しています。 ビルドプロセスにおいて、TailwindはプロジェクトのHTML、JavaScript、TypeScriptファイルなどをスキャンし、使用されているユーティリティクラスに対応するCSSルールを生成します。生成されたCSSは、通常、 グローバルな単一のスタイルシート として出力され、HTMLドキュメントの <head> などに読み込まれます。 例えば、HTMLに <div class="flex pt-4"> があれば、Tailwindは以下のようなCSSルールを生成し、グローバルスタイルシートに含めます。 /* Tailwindによって生成されるCSS(の例) */ .flex { display: flex; } .pt-4 { padding-top: 1rem; } このTailwindのスタイリングメカニズムにおける重要な点は、CSSルールが グローバルスコープ で定義されるという点です。 2つの絶望的な相性の悪さ Shadow DOMのカプセル化 vs. Tailwindのグローバルスタイル ここで問題の核心部分です。 Shadow DOM は、内部の要素に外部のスタイルが適用されないように カプセル化 する Tailwind CSS は、使用されているユーティリティクラスに対応するCSSルールを グローバルスコープに生成 する この二つは根本的に矛盾します。Tailwindがグローバルに生成した .flex { display: flex; } のようなCSSルールは、Shadow DOMの境界を越えてShadow Tree内の要素に到達しないのです。 先ほどのTypeScriptの例で、 <div class="container mx-auto p-4 bg-blue-200"> にTailwindのスタイルが当たらないのは、これらのクラスに対応するCSSルールがShadow DOMの外部(メインドキュメントのグローバルスコープ)に存在し、Shadow DOMがそのルールの適用をブロックしているからです。 Tailwind CSS v4について補足: Tailwind CSS v4では、新しいエンジンによるパフォーマンス向上などが謳われていますが、基本的なスタイリングのメカニズム(プロジェクトファイルをスキャンしてユーティリティクラスに対応するCSSをグローバルに生成する)という点では変わりません。したがって、v4を使用してもShadow DOMとの相性の悪さは解消されません。 どうにかできんのか?(解決策はあるのか?) 
この問題を解決するために、色々調べていると、この衝突の回避策はあるにはあるが、どれもWeb ComponentsやTailwind CSSのメリットを損なう、あるいは実装コストが非常に高いものになり、根本的な解決策は見つかりませんでした。苦し紛れなものですが回避策をいくつか紹介します。 ビルドしたTailwindのCSSをShadow DOM内にコピー&ペーストする 各Web ComponentのShadow DOM内に、そのコンポーネントで使用しているTailwindクラスに対応するCSSルールを手動、あるいはビルドツールで抽出して <style> タグとして埋め込む方法です。 デメリット: 非常に手間がかかり、メンテナンス性が低い コンポーネントごとに重複したCSSを持つことになり、ファイルサイズが増大する TailwindのJITコンパイル(使っているクラスだけを生成する)のメリットが活かせない Tailwindの運用ワークフロー(設定ファイル、プラグインなど)と乖離する Shadow DOMを使用しない Web ComponentsでShadow DOMを使わず、Light DOMに要素を配置する方法です。この場合、要素はメインドキュメントのDOMツリーの一部と見なされるため、グローバルなTailwindスタイルが適用されます。 デメリット: Web Componentsの最大のメリットである「スタイルのカプセル化」が失われ、外部のCSSがコンポーネントに影響を与えたり、コンポーネントのスタイルが外部に漏れ出たりする可能性が生じてコンポーネントの独立性が損なわれる これらのアプローチを見てもわかるように、Shadow DOMによる強力なカプセル化と、グローバルスタイルシートに依存するTailwind CSSは、根本的に思想が異なるため、無理に併用しようとするとどちらかの技術のメリットを大きく損なうことになります。 結論:Web ComponentsとTailwind CSSは併用するべきではない これまで見てきたように、Web Components(特にShadow DOMを利用する場合)とTailwind CSSの併用は、両者のメリットを打ち消し合ってしまうため、基本的には避けるべきです。 その理由は、2つの技術が持つスタイリングの 根本的な思想・仕組みが衝突 するからです。 Web Components (Shadow DOM) は、コンポーネントのスタイルを外部から完全に**隔離(カプセル化)**することを目的としている 一方、 Tailwind CSS は、ユーティリティクラスに対応するCSSを グローバルなスタイルシート として生成し、ページ全体に適用することを前提としている このため、Tailwindが生成した便利なユーティリティクラスのスタイルは、Shadow DOMの強固な壁を越えることができず、コンポーネント内部には適用されません。 回避策は存在するものの、いずれもコンポーネントの独立性を犠牲にしたり、開発の複雑さを増大させたりと、本末転倒な結果を招きがちです。それぞれの技術の長所を最大限に活かすためには、併用しないという選択が賢明と言えるでしょう。 今回の記事が、Web ComponentsとTailwind CSSの併用を検討されている方の参考になれば幸いです。
Hello, I am Udagawa, an engineer working on Prism Japan. I would like to introduce our marketing initiative that uses React Email to send emails automatically.

Challenges We Faced in Our Marketing Initiatives
Prism Japan was launched in August 2022, and since the beginning of the service it has acquired users through various marketing initiatives. However, there is no guarantee that users will keep using the service once we acquire them. Although it has been about two and a half years since the service started, the number of dormant users is still on the rise. To address this issue, we implemented a re-visitation (re-engagement) initiative using push notifications, but we faced several challenges.

Push notifications do not reach users who have turned off their notification settings. Even if we send push notifications encouraging users to revisit the app, they do not reach users who have uninstalled it, so we cannot achieve the desired effect. In fact, the push notification consent rate is only about 48%, and considering this rate together with uninstalled users, the number of users who actually receive notifications is quite limited. Furthermore, because users also receive notifications from other apps, ours tend to get buried among them. In this way, there were limits to the effectiveness of our re-engagement initiative based on push notifications.

On the other hand, we ask users to register their email addresses at the time of membership registration. The consent rate for emails registered this way remains very high, at about 90%. Even if users have deleted the app, emails can still reach those who have not canceled their membership, making email a suitable marketing channel for the re-engagement initiative.

However, from an operational perspective, this initiative had several challenges. First, marketing resources were limited, with a single staff member handling a wide range of tasks from planning initiatives to managing social media. Creating email content requires a lot of man-hours for manually tabulating rankings, selecting appropriate images, designing layouts, and so on. Considering the limited resources of the marketing staff, frequent delivery was difficult. So although we recognized frequent email delivery as an effective marketing method, it was not realistic because of the operational burden.

Using React Email to Automate Email Creation
Thus, we came up with the idea of automating the entire process from email creation to delivery. If we could build a system that automatically collects the information to be displayed, creates the email content, and sends the emails at scheduled dates and times with a predetermined layout, we could send emails tailored to users even with limited human resources. However, as engineers, we struggled with how to implement the automatic creation of HTML emails. If we implemented processing that directly manipulates HTML, reusability would be low, and issues such as rendering differences between receiving mail clients would arise. Looking ahead to future content replacement, we needed a highly reusable solution that allows new content to be added flexibly.
Amid these challenges, we discovered a library called "React Email." React Email has the following features:
- The ability to create HTML emails using JSX
- A real-time preview function
- High reusability through componentization

What is especially important is that reusable components make it easy to add new content whenever it is needed. Because React Email is written with React, dynamically replacing content also becomes easier. These advantages enable the delivery of personalized content at low cost by dynamically swapping content based on user behavior and interests. Instead of sending the same content to all users at once, delivering content tailored to each user's interests can be expected to achieve higher revisit rates and improved engagement. By adopting React Email, we gained a clear path to resolving the challenges in our email delivery initiative and were able to move forward with efficient user re-engagement measures.

HTML Generation Using React Email
From here, I will cover the implementation details. In the implementation, we use React Email to generate the HTML for emails, adopting a process in which HTML is generated from JSX using React Email's render function. First, we created the following component:

```tsx
import React from "react";

const AppCheckSection = () => {
  return (
    <div style={{ padding: "20px 0", borderBottom: "1px dashed #cccccc" }}>
      <div>
        <p>
          詳しいスポットの情報やアクセス情報はアプリで確認してみましょう。
          <br />
          他にも、アプリではあなたにだけのおすすめスポットを掲載中!
        </p>
        <a
          style={{
            padding: "10px 70px",
            background: "rgb(17,17,17)",
            borderRadius: "5px",
            textAlign: "center",
            textDecoration: "none",
            color: "#fff",
            display: "inline-block",
            marginBottom: "10px",
          }}
        >
          <span>アプリをチェック</span>
        </a>
        <br />
        <a href="https://deeplink.sample.hogehoge/">
          うまく開かない方はこちら
        </a>
      </div>
    </div>
  );
};

export default AppCheckSection;
```

In this way, we created the components that make up the emails. Then, simply combining the components in a parent component completes the email template.

```tsx
import React from 'react';
import AppCheckSection from '../shared/AppCheckSection';
import FooterSection from '../shared/FooterSection';
import RankingHeaderSection from './RrankingHeader';
import RankingItems from './RankingItem';

export type RankingContents = {
  imageURL: string;
  name: string;
  catchPhrase: string;
};

export type WeeklyRankingProps = {
  areaName: string;
  contents: RankingContents[];
};

const WeeklyRanking: React.FC<WeeklyRankingProps> = ({ areaName, contents }) => {
  return (
    <div style={{ backgroundColor: '#f4f4f4', padding: '20px 0' }}>
      <div>
        <RankingHeaderSection />
        <RankingItems areaName={areaName} contents={contents} />
        <AppCheckSection />
        <FooterSection />
      </div>
    </div>
  );
};

export default WeeklyRanking;
```

To generate the email HTML, React Email's render function is used. With fetchRegionalRankingData, we can obtain different content for each residential area and create the email accordingly.

```tsx
import { render } from '@react-email/render';
import WeeklyRanking from '../emails/weekly-ranking';
import { fetchRegionalRankingData } from './ranking-service';

export async function generateWeeklyRankingEmail(areaName: string): Promise<string> {
  const contents = await fetchRegionalRankingData(areaName);
  const htmlContent = await render(WeeklyRanking({ areaName, contents }));
  return htmlContent;
}
```

The HTML generated by the render function is used as the email body sent via the SaaS service's API.
In batch processing, ECS is started at the time scheduled by EventBridge to execute the email creation and sending process. The emails actually sent look like the following:

The images show content focused on the Kanto region, but the system can flexibly change the content according to the region the user has set. For example, if the user's residence is Osaka, the ranking for the Kansai region will be delivered by email.

React Email has a preview function that allows us to work on the email implementation just as we would when developing normally with React. Implementing this without the preview would have been extremely difficult, so this function was a huge help. By leveraging it, we were able to proceed with the implementation while checking layouts together with the marketing staff.

Through componentization, we structured elements such as footers and app launch promotion sections, in addition to the rankings, as reusable parts. By reusing existing components when creating new content, efficient and consistent email delivery becomes possible.

Scheduled email delivery can end up repeatedly sending similar content, which may lead to a decline in user interest or, in the worst case, cause the emails to be marked as spam and rejected. Even in an automated system, we still need to deliver content that continues to attract user interest. With this in mind, we believe that a highly reusable, component-based design that enables quick changes to the delivered content is important.

Effect of Automated Email Delivery
After starting automated email delivery using React Email and batch processing, the number of installations increased starting around the day delivery began (February 22). We believe the emails made dormant users interested in the app again and encouraged them to reinstall it. In addition, the number of daily active users (DAUs) around email delivery dates increased significantly and has shown a sustained upward trend since the automated email delivery initiative started. In this way, we succeeded in encouraging dormant users, including those who had uninstalled the app, to come back.

Summary
With automated email delivery built on React Email, we succeeded in reviving dormant users and increasing DAUs without manual intervention. Many marketing teams probably struggle with a growing number of dormant users and limited marketing resources in app development. Automating email creation with React Email reduces the burden of coming up with email content every week and enables efficient and effective marketing activities. We also found React Email highly useful for continuously improving content and releasing it quickly. And we found that, even in today's world of diversified communication channels, email delivery can still work effectively as a marketing channel if the content is aligned with user interests. If you're struggling with stagnant revisit rates or looking for ways to revive dormant users, this approach is definitely worth considering.
はじめに KINTO Unlimited appのクリエイティブを担当している中村屋です。この度、アプリにサウンドを実装することになったので、そのプロセスや考え方などをお話ししていこうと思います。 KINTO Unlimitedプロジェクトのビジネス担当の方から、アプリを継続利用するユーザーを増やすべく、「つい気持ちよくなって続けてしまうような音」を入れたりできないですかねと軽い相談(クリエイティブに完全にお任せ!)がありました。 サウンドを搭載したアプリは様々にありますが、なんかいい感じって思うアプリってサウンドも洒落てますよね?ここぞとばかりに「 ほほう、あのサウンドデザイナー/アーティストに依頼して、、 」と思ったのも束の間、オリジナルで作る予算はないよと。。大きく思い描いたのに残念。。 とはいえ、クオリティは譲れなかったので、品質が高いと言われるサウンドサービスを色々調べた結果、Spliceという有料サービス( https://splice.com/sounds )を利用することにしました。 メインの業務の傍らの案件かつ、経験があまりない領域ではありましたが、 膨大にあるサウンドからどうやって選んで組み立てるのか? サウンド実装までのデザインプロセスを知りたい クリエイティブが開発にどう関わってんの? と思う方はぜひ見ていってください。 Unlimitedのサウンド世界観は? まずはディレクションです。 ここが後工程に大きく響いてくるキモとなる部分です。アプリのサウンド世界観を言語化し説得力のある進め方にすること、サウンドの検討に判断軸を持ち、膨大な時間をかけないようにするためにも必要です。 スコープを定義:実験的な実装という位置付けのため、最小単位での特定の一連の体験に実装します。ユーザー操作のフィードバックSE(Sound Effect)、BGMをターゲットとします。 Unlimitedのサービスは、購入後のクルマが、技術とともにアップグレードしていく新しいクルマの持ち方であり、そのキーワードは 未来的・革新的・最適化・スマート・安心感 です。その「らしさ」をサウンドに込めます。 ここから連想するサウンドは、「 環境に溶け込みながらもモダンで心地よいデジタルサウンド 」( 仮説 )でした。アイデアや表現の幅を狭めないよう、遊びが効くレベルで仮説コンセプトを立てていきます。心が落ち着く要素もありながら、クールでハリのある感じ、、とイメージを膨らませながらサウンドを検索していきます。 そしてすぐに、これではNGだと気づきます。音のプロでない者が感覚で選んだサウンド群が、調和の取れた一貫性のあるものになるわけがないと。しかし、、ありました!質を担保し、効率的な方法が。 Spliceには サウンドパック が提供されていて、ゲームやUIなどのアプリ向けのパックがあったのです。そこで、モダンでSFの要素がありつつ心地よいテーマを持つサウンドパックを選び、サウンド候補を選んでいきます。そして、Adobe Premiere Proを用いてアプリの操作動画にSEを当てこみ、さらに候補を絞り込みます。 :::message Tips:制作サウンドデザイナー名がクレジットに入っているサウンドパックが特に優れています。コンセプトが一貫して明確、音質・音量が安定(ノーマライズ)していて、余計な調整なく実装しやすいと感じました。 ::: 方向転換 完成度を高めず、早めにプロジェクトメンバーにバリエーションを聞いてもらって、方向性の意見をもらいます。基本はいいねという意見でしたが、「いいんだけど、もうちょっと俗的な感じの方がいいのかな?」という意見に目が止まりました。 アプリ・サービスに深く関わっているメンバーからの意見です。この感覚をクリエイティブが拾い上げ、言葉にできない違和感を解釈する必要があると思いました。 俗的=一般的、洗練されていない、ありきたりということですが、言葉通りに受け取らず、デザイン的に思考します。洗練された未来的なサウンドは適していない→ユーザーに提供する価値はそこではない→ ビジョンよりもリアルなユーザー に寄り添い、共感を呼ぶべきであると解釈します。 冒頭の「アプリ継続利用促進のため」にもあるように、初心者向けコンテンツやゲーミフィケーションをベースにした施策などをアプリでは行ってきており、一方的な価値提供ではなく、リアルなユーザーにフォーカスした利用促進を行っています。 当初考えたコンセプトは間違いではないが、アプリコンセプトが緩やかに変化しており、それに伴ったアップデートが必要ということがわかりました。そこで、サウンドコンセプトを「 最新の技術を親しみやすく、共に成長していく安心感のある体験 」 の提供 と再構築しました。 このコンセプトで再度サウンドを練り直したサンプルの一部がこちら。 https://www.youtube.com/watch?v=oeGNNqRJs50 自分の記憶にあるような親しみのあるサウンド、遊び心が効いてクセになりそうなイメージになったのではないでしょうか。 実装する前に 決まったサウンドデータをエンジニアさんに渡してあとはよろしく!では終わりません。ユーザー体験を形作る上で、ここからの設計フェーズも非常に重要です。 例えば、アニメーションの見せどころの視覚変化にサウンドが密接に同期するととても気持ちいいですよね(例:コインがキラッと光った瞬間に音が鳴る)。逆に、ここにズレがあると違和感が生まれ、ストレスを与えます。 また、ボタンを押した際に鳴るSEを考えた時に、押した瞬間0.00秒ジャストに鳴ると硬い印象になり、数十ミリ秒のわずかな遅延再生させるとより自然で洗練された印象になります。※テーマによって考え方が変わります。 このような考え方を取り入れて、どこで・いつ・どのように再生されるのかを、再現性を担保できるように仕様書にまとめます。(まずは、フィジビリを考えすぎずユーザー体験の理想として落とし込んでいきます)特にサウンドの専門アプリではないので、専門的な概念まで踏み込まず、以下のように実装仕様書をまとめます。 管理ID/サウンドファイル名/対象画面 再生トリガー:「〇〇ボタンタップ時」「△△アニメーション表示時」など、どのようなユーザー操作やイベントで音が鳴るかを明記。 ループ再生の有無 音量:BGMやキャンセル音などは抑えめになど、サウンドの意味や関係性を元に設計。 遅延再生:この項目はトリガーを起点として再生のタイミングを調整できるので、トリガー内容が複雑になるのを防ぎます。 フェードイン:音の始まりの調整、SEとBGMの競合回避に役立てることもできます。 フェードアウト:BGMが突然途切れるのではなく、余韻を残して停止させると丁寧な印象です。 備考:再生タイミングの意図など、疑問が生まれないように記載していきます。 そして、データについてです。アプリがインストールされるデバイスはユーザーのもの、つまりユーザーデバイスに負荷をかけないよう、アプリ容量には気を配らなければなりません。以下のデータ仕様は最上位品質ではないものの、高品質なラインで定めています。 SE:WAV形式 または AAC形式* BGM:AAC形式 *重要なサウンド(ブランドSE)や頻度の高いSEはWAV推奨、200KB超え+1秒以上のSEはAACを検討 AAC圧縮後の基本ライン:ステレオ音源256 kbpsの可変ビットレート(VBR)、サンプリングレート44.1/48kHz SEは瞬間に再生される用途なので、データがそのまま再生されるWAV(非圧縮・最高品質)が適しており、AAC(圧縮)は再生にデコード処理が走るためほんの少し遅延が起きるようです。※近年のスマートフォンの処理では、プロ以外にはその差は感じられないと思われますが。 この他にもオーディオの割り込み、プリロード(事前メモリ読み込み)など事細かに定義しなければならないこともありますが、ある程度のところでプロデューサー・エンジニアさんと共有し、詳細を詰めていきます。よくわからないところは悩む前に知見のある人と一緒に前に進める、内製開発のメリットです。 おわりに 開発の内容としてはまだ続きますが、一つの区切りとして、ここまでとさせていただきます。 
熟知しない領域でここまで進められたわけは、ChatGPTをはじめとしたAIの活用でした。必要な観点を洗い出し、壁打ち相手として利用していき、説得力のある形になるまで考えを深めることができました。しかし、サウンド理論など掘っても掘っても全く底が見えない。。そこで、私には 社内で共通認識を持つことのできる範囲での定義 をすることが重要でした。専門的になりすぎず、プロジェクト内で理解されやすい仕様書を作ることやコミュニケーションに気を配っています。(例えば、音量はdBFS値を使わず、基準点を設けて相対スケール値で表し、理解しやすい0.0-1.0の数値で定義するなど) それでもなお、サウンドは非常に奥深く、ここでは欠けた内容も多いことは承知しています。また、音楽は人によって(もっというとその時の精神状態によって)感じ方が異なる感性の塊のようなものです。そういう類のものをユーザー体験の中に落とし込んでいったプロセスを紹介しました。 最後に、KINTOテクノロジーズの開発ではMVP(Minimum Viable Product)の考えが浸透していますので、共感を得られれば、アイデアをスピーディに組み立てて開発まで進めることができます。そして、ユーザーの反応を見ながら、アップデートを繰り返していくことができます。これはその一つの事例でもあり、そのような開発にクリエイティブがどう関わっているか、その一端を感じていただけたなら嬉しく思います。最後までお読みいただき、ありがとうございました。
I'm feeling a bit nervous writing this blog after ages. I'm Sugimoto from the Creative Office at KINTO Technologies (KTC for short). In 2024, on our third year since the company was founded, we gave our corporate website a full redesign. Three years after our founding, the project began when the HR team requested a new recruitment-focused website to help attract more people to join us in the future. Since the corporate website is centered around recruitment, we interviewed not only the Human Resources team but also members of management to understand the company's direction, as well as engineers from the Developer Relations Group to capture voices from the front lines. Questions like “What kind of people does the company truly want?” and “Who do we genuinely want to work with?” guided our conversations. As we listened to various perspectives—along with their challenges and aspirations—the purpose of the corporate website gradually came into focus. "Let's create a website that shows what KTC is all about to engineers and creators who stay curious about technology, keep up with the latest trends, and take initiative." That goal shaped our concept. The concept is "The Power of Technology & Creativity." We picked this word to reflect our drive to lead Toyota's mobility services through technology and creativity. Setting a concept might feel like an extra step, but it gives everyone a shared point to return to, especially important when different roles are involved and the project starts to drift. "Do we really need that feature?" "Can't we make it more engaging?" With this concept in place, even questions from a different angle make it easier to say, "That’s why we'll do it." Or, "That's why we won't." Personality Settings The next step for us was to define a brand personality, a clear picture of what kind of person the company would be, and how it would behave if it were human. (More on brand personality below .) Creating a brand personality from the ground up takes time and effort, often requiring input from across the company. However, since the main goal of launching the corporate website was recruitment, speed was a priority. So we built on what was already in place within our company: our vision, values, culture, and work attitude. The personality we landed on for KTC is, simply, "creator." As creators, we define ourselves as those who use technology and creativity to build the best products for our users: products that are intuitive, clear, thoughtful, and useful. Creating an Exciting Mood Board With the brand personality set, the next step is figuring out how to reflect that in the design of the corporate website. So, one more step! Before the lead designer jumped in, the whole Creative Office came together to build a mood board. This gives us a visual anchor to return to; just like the concept itself, which helps keep things on track and makes the rest of the process smoother. Each designer brought in visuals they felt captured the KTC vibe, and the mood board session turned into a lively exchange. Creating a mood board also led to some new discoveries. I imagined the output would reflect the vibe of a shiny, fast-paced California tech company. But when we shifted our perspective to ask, 'Who are we, really?', the answer became clear: we are (or aspire to be) a professional engineering group that embraces the spirit of Japanese craftsmanship rooted in the Toyota Group’s gemba philosophy. 
The mood board was inspired by globally recognized modern systems and our defined brand personality. Our goal was to create a corporate website that offered high usability while visually expressing our brand identity. Achieving a Jump in Creativity and Efficiency By clearly defining the website’s "personality," "mood," and "purpose," everything came together with a strong sense of consistency—from the photo tone and interview content to the copywriting and implementation. It really highlighted how that clarity can enhance both creativity and efficiency. It also made it easier to explain the design logically to non-designers, helping us put even abstract ideas into words. Honored to Receive International Recognition Our newly redesigned corporate website has received several international web design awards, including the prestigious CSS Design Awards. We'd love for you to take a look. And if something clicks, we hope it sparks your interest in us! Check out the website here! https://www.kinto-technologies.com/ ※What is brand personality? It represents what kind of traits and personality a brand (or company) would have if it were a person. This is called its archetype. We use a common framework that breaks "personality" into 12 types. This helps us explore a company's character, thinking, behavior, and distinctive features. Having a clear brand personality makes it easier to present a consistent image. Even if the audience isn’t consumers, you can still leave a strong and unified impression on your target—whether it’s through a corporate website like this or event giveaways.
こんにちは!KINTOテクノロジーズ株式会社の大阪採用担当、Okaです。 このたび、私たちOsaka Tech Labは新しいオフィスに移転しました。この記事では、その舞台裏と新オフィスの魅力をお届けします! Osaka Tech Labとは Osaka Tech Labは、2022年に心斎橋で開設した西日本のエンジニアリング拠点です。このたび、JR大阪駅直結のビルに移転し、さらにアクセスが良くなりました。 ソフトウェア開発、クラウドインフラ、データ分析など、さまざまな分野のエンジニアが集まり、自社プロダクトの開発・改善に取り組んでいます。 みんなで作り上げた「Osaka Tech Lab 2.0」 コンセプト誕生の経緯 オフィス移転をきっかけに、「Osaka Tech Lab 2.0」プロジェクトがスタート! このプロジェクトは、最初から誰かが用意していたものではありません。メンバー自身が「こんな場所にしたい」と想いを持ち寄って、みんなで作り上げたものです。 その中で生まれたのが、「集GO!発SHIN!CO-LAB」というコンセプト。 「単なる業務スペースではなく、大阪らしさや文化を活かしながら、新しい価値を“みんなで創っていく”場にしたい。」そんな気持ちを込めて、これまでの活動を振り返りながら、みんなで名前をつけました。 ![](/assets/blog/authors/oka/osakarenewal/1.png =600x) 「この指とまれ」という文化 Osaka Tech Labでは、もうひとつ、私たちらしい合言葉が生まれました。それが「この指とまれ」です。やりたいことがある人が、「やってみたい」と声をあげる。そこに、「いいね」「一緒にやろう」と自然に人が集まってくる。そんな場面が、私たちの周りではよくあります。 この動き方を、みんなで「この指とまれ」と呼ぶようになりました。 ![](/assets/blog/authors/oka/osakarenewal/2.png =600x) 実行委員会形式で進めた新オフィスづくりも、この「この指とまれ」スタイルがきっかけ。誰かが声をかけて、そこに集まったメンバーで、一緒に手を動かしながら作り上げてきました。そんな想いが詰まった、新しいオフィス。ここからは私たちのオフィスの一部をご紹介します! 新オフィスの魅力をご紹介! ![](/assets/blog/authors/oka/osakarenewal/3.png =600x) オフィスの床には、会議室へと続く道路のラインがあしらわれています。 🛝 PARKエリア | 靴を脱いで、ほっと一息 ![](/assets/blog/authors/oka/osakarenewal/4.png =600x) 靴を脱いで、ゆったり過ごせる土足禁止のリラックス空間をつくりました。カジュアルなミーティングやちょっと一息つきたいときにぴったりの場所です。さっそく全社MTGでも、自然とみんなが集まるお気に入りの場所になっています。 ![](/assets/blog/authors/oka/osakarenewal/5.png =600x) 🚗 会議室の名前も、Osaka Tech Lab流 会議室には、ガレージやピットをモチーフにした名前をつけています。その中でも「モータープール」など、大阪らしさとモビリティを掛け合わせたユニークな名前も。 ※「モータープール」:大阪でよく使われている「駐車場」を意味する言葉です。 ![](/assets/blog/authors/oka/osakarenewal/6.png =600x) Slackでブレストを重ねる中で、雑談から自然と生まれたこのネーミング。みんなで楽しみながら決めた、“私たちらしい”名前になりました。 ![](/assets/blog/authors/oka/osakarenewal/7.png =600x) (ちなみに、大阪ならではのユーモアも交えながら、アツい議論が繰り広げられながら決まりました!) ![](/assets/blog/authors/oka/osakarenewal/8.png =600x) 🛣️OSAKA JCT KINTOの室町オフィス同様「OSAKA JCT」という、発信スペースも誕生しました。壁のデザインはOsaka Tech Labのデザイナーが、みんなで考えたコンセプトをカタチにした、自慢のクリエイティブです。 ![](/assets/blog/authors/oka/osakarenewal/9.png =600x) オフィスの開所式は、このJCTを活用しながら「この指とまれ」でメンバーを募り、実行委員会形式で企画・運営しました。移転式もメンバー主導で進め、マネージャー陣を招いて社内の決起会を実施。すべてが「みんなで作った」手作りのイベントでした。 ![](/assets/blog/authors/oka/osakarenewal/10.png =600x) 新オフィスについて、メンバーからはこんな声も届いています。 仕事へのモチベーションが自然と上がり、背筋が伸びる感覚になります。 共有スペース「PARK」は開放感があり、大人数でも自然と集まれる心地よい場所。 モビリティをモチーフにした工夫がオフィスのあちこちに。場所の名前や標識、道を模した床、タイヤの机、クルマ型の移動式ベンチなど、細部にまで遊び心が散りばめられていて、歩いているだけでわくわくします。 開所式でメンバーにインタビューしたところ、「自分たちの声がオフィスに反映されているのが嬉しい」「”自分たちの場所”として愛着が持てる」といった声がたくさん届きました。 Osaka Tech Labで感じたこと 実は、この「みんなでつくる」という空気は、旧オフィスの頃から変わっていません。 ![](/assets/blog/authors/oka/osakarenewal/11.jpeg =600x) 旧オフィスの閉所式は、みんなでお酒を持ち寄って乾杯する、あたたかくてゆるい飲み会でした。部署も肩書きも関係なく、ふらっと集まって、気づけばわらわらと飲み会が始まっている——そんな文化が、Osaka Tech Labには自然と根付いています。 採用担当として、この距離感や、自分たちの声を大事にできる文化こそ、Osaka Tech Labの大きな魅力だと感じています。新オフィスになった今も、この雰囲気はきっと変わりません。 これからも、未来のことを気軽に語り合える、そんな場所であり続けたいと思っています。 一緒に「集GO!発SHIN!CO-LAB」しませんか? イベント開催情報 Osaka Tech Labでは、私たちのカルチャーを体験できるイベントを定期的に開催しています。「この指とまれ」にピンときた方は、ぜひ気軽に遊びにきてください。コンセプトの具体的な取り組みとして、Osaka Tech Labのメンバーが日々の開発で得た知見やノウハウを共有するイベント「CO-LAB Tech Night」を開催いたします。 CO-LAB Tech Night vol.1 , 全部内製化 大阪でクラウド開発やってるで! #1 開催日時:2025年7月10日(木) 19:00-21:30 概要:クラウド開発をテーマに、クラウドインフラ、SRE、データ分析基盤を取り上げ、Osaka Tech Lab のメンバーが、現在の取り組みや、そこから得た知見を共有します。 詳細: https://www.kinto-technologies.com/news/20250702 CO-LAB Tech Night vol.2 , Cloud Security Night #3 開催日時:2025年8月7日(木) 19:00-21:30 概要:AWS、Google Cloud、Azureなどのマルチクラウド環境におけるクラウドセキュリティに関する話題を中心に、各社の取り組みを通じて、クラウドセキュリティの知識を深めるイベントです。今回は、東京で開催している「Cloud Security Night」の第3回目を大阪で開催いたします! 
Details: https://www.kinto-technologies.com/news/20250709

![](/assets/blog/authors/oka/osakarenewal/12.png =600x)

Get the latest information on the Osaka Tech Lab special site!

Osaka Tech Lab will keep sharing on a variety of themes, including engineering, cloud, and data analysis, through events and the Tech Blog. Event information will be updated on the Osaka Tech Lab special site as it becomes available. If you're interested, be sure to check the event list (CO-LAB events)!

▼The Osaka Tech Lab special site is here: https://www.kinto-technologies.com/company/osakatechlab/

![](/assets/blog/authors/oka/osakarenewal/13.png =600x)

(The Osaka Tech Lab special site was also born out of "kono yubi tomare")

The Osaka Tech Lab special site, published at the same time as this article, is another initiative born out of "kono yubi tomare." People naturally gathered around members' voices saying "we want to share more" and "we want to convey more of the real Osaka atmosphere," and together we handled everything from planning and design to writing and publishing. We even brought in members of the Creative Office in Tokyo, making it a truly Osaka-style challenge realized through CO-LAB. This special site is packed with our culture as well, so please take a look.

Casual interviews are also available

If you'd like to hear a bit more, or want to get a better feel for the Osaka Tech Lab atmosphere, please feel free to apply via the URL below!
https://hrmos.co/pages/kinto-technologies/jobs/1859151978603163665
Hello. My name is Hoshino, and I'm a member of the DBRE team at KINTO Technologies. In my previous job, I worked as an infrastructure and backend engineer at a web production company. Over time, I developed a strong interest in databases and found the work of DBRE especially compelling, so I decided to join the DBRE team at KINTO Technologies in August 2023.

The Database Reliability Engineering (DBRE) team operates as a cross-functional organization, tackling database-related challenges and building platforms that balance organizational agility with effective governance. DBRE is a relatively new concept, and only a few companies have established dedicated DBRE organizations. Among those that do, their approaches and philosophies often differ, making DBRE a dynamic and continually evolving field. For examples of our DBRE initiatives, check out the tech blog by Awache ( @_awache ) titled Efforts to Implement the DBRE Guardrail Concept, as well as the presentation at this year's AWS Summit and p2sk's ( @ p2sk ) talk at the DBRE Summit 2023.

In this article, I'd like to share a report on the DBRE Summit 2023, which was held on August 24, 2023!

What is DBRE Summit 2023?

This event is for learning about the latest DBRE topics and practices, as well as networking in the DBRE community. A total of 186 people signed up in advance via connpass, both online and offline, and many of them also participated on the day. Thank you to all the speakers and attendees for taking the time out of your busy schedules to help make the DBRE Summit a success!

Linkage's Initiatives to Make DBRE a Culture, Not Just a Role

Taketomo Sone/Sodai @soudai1025 , Representative member of Have Fun Tech LLC, CTO of Linkage, Inc., and Co-organizer of the DBRE Users Group (DBREJP) @ speakerdeck

DBRE is not just a role, but a database-centered operational philosophy and a culture of maintaining databases as part of everyday product development activities. When a hero who can handle all databases emerges, it creates the risk of becoming overly dependent on that person. To prevent this, we should strive for a peaceful environment where stable operations don't rely on heroes. To achieve that, we need to build a strong organizational culture at the company level. While individual skill and enthusiasm are necessary, they alone can't build a culture, so the first step is to create the environment. In addition, because design is directly linked to the security and operation of the database, there needs to be a culture in which developers practice DBRE. Database Reliability Engineering is a philosophy and an operational style that aims to solve problems through systems rather than craftsmanship. DBRE focuses not on reacting to issues, but on preventing them in the first place. It's never too late to start!

I realized that when putting DBRE into practice, it is very important to involve others rather than trying to do it all by ourselves. DBRE = philosophy and culture! To help build a company culture, I want to proactively engage in cross-functional communication!

Current State of Mercari's DBRE and a Comparison of Query Replay Tools

Satoshi Mitani @mita2 , DBRE, Mercari, Inc.

Mercari's DBRE team was established about a year ago. Until then, the SRE team was in charge of the database. Initially, the system architecture consisted of just a monolithic API and a single database, but it has since been split into a monolith and microservices.
The main responsibilities of the DBRE team include providing support for the databases owned by each microservice, answering various DB inquiries to resolve developers' concerns, and researching tools to increase productivity. When the team started providing support for the microservice DBs, it faced challenges such as wanting to act proactively but not being able to see the issues easily, and the DBRE team not being well recognized. To address these:

A Developer Survey was conducted, with multiple-choice questions about what developers expect from DBRE.
A DBRE Newsletter is published every six months, with active communication from the DBRE team.

These efforts have gradually raised awareness across the company, leading to an increase in requests. Other DBRE responsibilities include operational tasks related to the monolith DB and efforts toward modernization. To select a query replay tool capable of mirroring production queries, the team defined key evaluation criteria and then conducted a survey.

What is a replay tool? A replay tool reproduces production queries or traffic in a separate environment. It is used to investigate the impact of database migrations or version upgrades.

Tools compared:

Percona Query Playback: a log-based, easy-to-use replay tool.
MySQL-query-replayer (MQR): a tool built for large-scale replays, and you can really sense the creator Tombo-san's passion.

I got the impression that the DBRE team is actively sharing organizational challenges through Developer Surveys and DBRE Newsletters. It was also very insightful to hear about the criteria and process used in evaluating replay tools.

Introducing DBRE Activities at KINTO Technologies

Masaki Hirose @ p2sk , DBRE, KINTO Technologies @ speakerdeck

The DBRE team is part of a company-wide cross-functional organization called the Platform Group. The roles of DBRE are divided into two categories:

Database Business Office: responsible for solving problems based on requests from development teams and stakeholders, as well as promoting the use of DBRE-provided platforms.
Cloud Platform Engineering: responsible for providing database-related standards and platforms to promote effective cloud utilization while ensuring governance compliance.

DBRE's activities are determined by defining four pillars and then deciding on specific activities based on the current state of the organization.

Actual activities:

Building a system to collect information on DB clusters (a rough sketch of what this could look like appears below)
DB secret rotation
Validation: Aurora zero-ETL integration with Redshift (preview)

KINTO Technologies' DBRE team is building platforms to enhance the reliability of databases. To achieve this, we've chosen to solve the challenges through engineering: by using the cloud effectively, we balance agility with database security and governance, and by evolving these efforts into a company-wide platform, we continue to drive positive impact on the business. We're proceeding with this under an approach called Database Reliability Engineering.

I was very impressed by how the team clearly defines the role of DBRE and leverages that definition to design organizational systems that both improve database reliability and contribute to the business. In the future, I hope to contribute to building even better systems based on the four DBRE pillars.
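As a side note, the talk doesn't describe how the "system to collect information on DB clusters" mentioned above is actually implemented. Purely as an illustration of what such an inventory job could look like on AWS, here is a minimal sketch using boto3; the region, the selected fields, and printing the result are all my own assumptions, not a description of KINTO Technologies' implementation.

```python
import boto3


def collect_db_cluster_inventory(region_name: str = "ap-northeast-1") -> list[dict]:
    """Collect basic metadata for every RDS/Aurora DB cluster in one region.

    Illustrative sketch only: a real inventory system would likely iterate
    over multiple accounts and regions and persist the results somewhere
    queryable instead of returning them in memory.
    """
    rds = boto3.client("rds", region_name=region_name)
    inventory: list[dict] = []
    marker = None
    while True:
        # DescribeDBClusters is paginated via the Marker parameter.
        kwargs = {"Marker": marker} if marker else {}
        response = rds.describe_db_clusters(**kwargs)
        for cluster in response["DBClusters"]:
            inventory.append(
                {
                    "identifier": cluster["DBClusterIdentifier"],
                    "engine": cluster["Engine"],
                    "engine_version": cluster["EngineVersion"],
                    "status": cluster["Status"],
                }
            )
        marker = response.get("Marker")
        if not marker:
            break
    return inventory


if __name__ == "__main__":
    for item in collect_db_cluster_inventory():
        print(item)
```

A real version of this would presumably run on a schedule and feed whatever reporting or governance checks the platform needs, which is where the platform-building work described in the talk comes in.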
Implementing DBRE with OracleDB: We Tried It at Oisix ra daichi

Tomoko Hara @tomomo1015 , DBRE, Oisix ra daichi Inc. and Co-organizer of the DBRE Users Group (DBREJP) @ speakerdeck

Among the many aspects of visibility that SRE/DBRE can provide, cost visibility tends to be overlooked. So, we're taking on the challenge of managing infrastructure costs across the entire company. Our approach involves reviewing the list of invoices to understand the actual state of the system and identify potential issues. Additionally, by evaluating cost-effectiveness, we contribute to improving business profit margins. Database costs make up a significant portion of overall infrastructure expenses. While databases are critical enough to warrant that investment, they must not be neglected or treated with complacency. To reduce database costs, we're implementing measures such as stopping databases used in development environments on days when they are not in use, and considering the most cost-effective approach. When using a commercial database, knowing the license type and its associated cost is very important in putting DBRE into practice. Conduct a license inventory to understand whether the licenses your company has contracted are appropriate. Take the time to think about how we can improve reliability, grow, and enjoy what we do, both now and in the future. By visualizing costs, many things become clear, so we encourage you to start by making costs visible as an approach to contributing to the business and improving reliability.

It was very interesting to hear about cost visualization, which is something I don't often get to hear about. As mentioned in the talk, the database accounts for a large proportion of infrastructure costs and is a critical part of the system, so I felt it was very important to visualize it and evaluate its cost-effectiveness. Including the cost aspects, I found it helpful, and I hope to contribute to solving such challenges as part of DBRE going forward.

ANDPAD's Initiatives to Automate Table Definition Change Review and Create Guidelines

Yuki Fukuma @fkm_y , DBRE, ANDPAD Inc. @ speakerdeck

At ANDPAD, when a product team makes changes to table definitions, the DBRE team is responsible for reviewing them, and several issues have arisen in the process. For this reason, the team felt the need to create a scalable mechanism to improve review efficiency. As part of the investigation, they decided to categorize the review comments from DBRE to the developers, and release small, incremental changes starting with those they could address, an approach adopted in order to get early results while moving forward.

Automating access paths: although the database terms of use had already been created, it was hypothesized that they weren't being read much until they were actually needed. So, an access path was created to display them at the moment they're needed. As a result, the number of views increased and the frequency of comments during reviews decreased.

Automating table definition reviews: a system was built to automatically review items that can be checked mechanically. This reduced the review costs for DBRE. By creating such a system, the team not only improved review efficiency but also made it possible to apply the process to products that had not previously been reviewed, enabling DBRE to automate table definition reviews.

I found it impressive how the automation of access paths and table definition reviews made the process highly efficient and easy to use at the right time. This was very helpful, and I hope to build something similar myself in the future.
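The talk doesn't go into how the automated review works internally, but to make the idea of "items that can be checked mechanically" concrete, here is a small, purely hypothetical sketch: a script that scans a CREATE TABLE statement for a few rules a DBRE team might otherwise point out by hand. The specific rules, names, and regular expressions below are my own assumptions, not ANDPAD's.

```python
import re

# Hypothetical examples of mechanically checkable review rules.
RULES = [
    ("missing PRIMARY KEY",
     lambda ddl: "PRIMARY KEY" not in ddl.upper()),
    ("charset is utf8 instead of utf8mb4",
     lambda ddl: re.search(r"CHARSET\s*=\s*utf8\b", ddl, re.IGNORECASE) is not None),
    ("TEXT/BLOB column but no COMMENT anywhere in the DDL",
     lambda ddl: re.search(r"\b(TEXT|BLOB)\b", ddl, re.IGNORECASE) is not None
                 and "COMMENT" not in ddl.upper()),
]


def review_ddl(ddl: str) -> list[str]:
    """Return human-readable findings for a single CREATE TABLE statement."""
    return [name for name, check in RULES if check(ddl)]


if __name__ == "__main__":
    sample = """
    CREATE TABLE user_note (
      id BIGINT NOT NULL,
      body TEXT
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    """
    for finding in review_ddl(sample):
        print(f"NG: {finding}")
```

In practice, a check like this would more likely run in CI against the migration files in a pull request, so developers get the feedback before a human reviewer looks at the change.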
Michi Kubo @amamanamam , DBRE, ANDPAD Inc. @ speakerdeck

This talk was about creating a course of action to ensure that table definition changes are implemented uniformly, and with higher quality, in production by all teams. One of the issues was that the quality of validation during table definition changes varied between teams, leading to migrations being carried out without sufficient validation and potentially causing service disruptions or failures. To address this, the team conducted interviews, analyzed the causes, and then created clear guidelines to ensure the quality of validations.

Overview of the guidelines:

Create a list of tasks to be completed before the actual execution
Create a list of items to be included in pull requests
Create a flow for considering release timing

As a result of implementing these guidelines, validation results became more comprehensive and unified.

I found it impressive how the team clearly identified the issues and organized guidelines and processes to improve quality, which helped raise awareness across the team and enhance reliability. As a DBRE team member, I'd like to organize guidelines in a way that motivates the whole team to empathize with the issues and collaborate in solving them.

Panel Discussion: "The Future of DBRE"

Taketomo Sone/Sodai @soudai1025 , Satoshi Mitani @mita2 , Tomoko Hara @tomomo1015

What's the best way to get started with DBRE? It might be a good idea to start by setting a goal and then determining what to do based on that. Identifying challenges and working to build a culture around addressing them is important. Database standardization might be a good topic to tackle first.

What unique skills are required to practice DBRE? Since DBRE activities span different teams, communication skills are essential. You need a personality that can respond positively under pressure. The ability to build trust is important.

What makes DBRE an attractive career? It will enhance your DB expertise. Since the core technologies of databases don't change rapidly, the knowledge and experience you gain can be used for a long time. It will also broaden your perspective beyond databases to include applications as well.

What are you looking to work on in the future? I'd like to engage in community activities as a DBRE. I'd like to accumulate more success stories as a DBRE. I hope DBRE will become a more widely recognized role.

I was a bit surprised to learn that DBRE requires more than just database knowledge. Of course, database knowledge is essential, but I realized that communication skills and a positive mindset are just as important for building a cross-organizational culture. I personally hope that DBRE becomes a role more and more people aspire to.

Summary

So, how was it for you? DBRE itself is still a developing field, and only a limited number of companies have adopted it so far. That's why the DBRE Summit was such a valuable opportunity to learn about the DBRE initiatives of various companies. Having recently transitioned from backend engineering to DBRE, I'm not yet a database specialist. However, through this summit, I came to recognize that working on database improvement tasks and building cross-functional cultural foundations are also important activities of DBRE.

https://youtube.com/live/C2b93fgn05c
Hi there! This is MakiDON, and I joined the company in December 2024. In this article, I asked our December 2024 new joiners to share their first impressions right after joining, and I've put their thoughts together here. I hope this content will be useful for those who are interested in KINTO Technologies, and serve as a reflection for the members who participated in the interview!

Fsk

Self-introduction
I work on frontend development in the Business System Group, part of the Business Systems Development Department. So far, I've been doing frontend work using Next.js, always aiming to build user-friendly interfaces. There's still plenty for me to learn, but I'll do my best to be helpful in any way I can.

How is your team structured?
There are five of us, including me. We've got one PM, two front-end engineers, and two back-end engineers.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
Using generative AI tools like Copilot and ChatGPT has been a huge help. I was a bit nervous before joining, but everyone was so warm and welcoming that I quickly felt at ease.

What is the atmosphere like on site?
I really appreciate how easy it is to ask for help when I run into something.

How did you feel about writing a blog post?
I think it's great to have the opportunity to share my thoughts and feelings with everyone.

Question from Frank to Fsk: If you could hand off just one boring daily task to a robot, what would it be?
Definitely cleaning! It eats up time every day, and I'd much rather spend that time doing something else.

Takahashi

Self-introduction
I work as a project manager for the Owned Media Group and the Marketing Product Development Group. I focus on helping everyone move toward a shared goal, acting as a good partner to our clients and internal teams and as a bridge between engineers and business divisions. In my previous job, I gained experience as a web designer. Later, I transferred to the Development Department, where I managed a range of platform-related areas, including membership systems, payments, points, and facility information.

How is your team structured?
The Owned Media Group has one project manager and two engineers. The Marketing Product Development Group focuses on static content and includes a team leader, a project manager, a tech lead, and two engineers.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
My first impression was how quiet the office was. At my previous job, the sales team was on the same floor and right nearby, so it was always noisy. As for any gaps or surprises, I'd say that each group has its own development style, so you need to stay flexible and be ready to adapt your mindset.

What is the atmosphere like on site?
It's quiet. So quiet that I feel like I need to be a little mindful when tossing a can into the trash.

How did you feel about writing a blog post?
During my self-intro at work, I think the only thing that really came across was that I'm into Monster Hunter. So I'm glad to get the chance to write this article.

Question from Fsk to Takahashi: Do you prefer World or Wilds? lol
I'd say Wilds, especially with all the upcoming updates to look forward to! Hoping it becomes something we can enjoy for over 10 years, just like World!

Generative AI is currently being used in the design field, and many engineers are being called "AI prompt engineers." What do you think about this trend?
As long as people are careful not to infringe on copyright or image rights, I think it's totally fine to let generative AI handle certain tasks. That said, I don't think it's suitable in contexts like contests or competitions where creativity is what's being judged.

Lyu

Self-introduction
I currently belong to the Business System Group in the Business System Development Division, where I mainly work on backend system development. My day-to-day work involves designing, implementing, operating, and maintaining various systems that help streamline internal operations and improve data integration. I always keep stability and scalability in mind when developing systems. Previously, I worked at IBM, where I was involved in developing medical information systems for major hospitals in Japan. I've had hands-on experience across the entire process, from requirements gathering and design to development, rollout, and after-sales support, and I've always aimed to build systems that truly meet the needs of users on the ground. Drawing on that experience, I continue to deepen both my technical skills and my understanding of the business so I can deliver systems that are even more practical and valuable.

How is your team structured?
There are five of us, including me. We've got one PM, two front-end engineers, and two back-end engineers. Everyone is a pro in their own area, and I've learned a lot from being part of the team.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
The first thing that stood out was how warm and welcoming everyone was. There's a relaxed atmosphere where people communicate freely, without being overly concerned about hierarchy. I was also impressed by the wide range of in-house events and active club activities; there's always something going on. The benefits are really employee-friendly too, which makes it a great place to work. There wasn't a big gap between what I expected and what I actually experienced. If anything, the work environment turned out to be even better than I had imagined.

What is the atmosphere like on site?
It's bright and really enjoyable. Of course, we talk about work, but it's also easy to share fun ideas or little things that happen during the day. The team members are all close to each other, and it's easy to get along with anyone, so you can work with peace of mind.

How did you feel about writing a blog post?
I'm really glad to have the chance to share my experiences like this. I hope that something from my daily work or thoughts can help someone out there, even just a little.

Question from Takahashi to Lyu: If you were to buy a car through KINTO, which car would you like to drive?
I'd definitely go for the Crown. I've always thought it looked cool. Plus, I actually use this model a lot when creating test data at work, so I've kind of grown attached to it. lol The employee discount program also makes it possible to get a Crown at a really reasonable price, which is a big plus. On top of that, the range of customer-friendly services, like the comprehensive insurance plan, really makes the whole package feel impressive.

MakiDon

Self-introduction
My name is MakiDon, and I joined the company in December. I belong to the Marketing Product Development Group in the Mobility Product Development Division. I mainly handle data analysis and machine learning tasks. My main role is to identify issues through data analysis, propose strategic solutions and exit plans, and support system design using machine learning.
Before this, I worked as a project manager at a startup focused on architecture and IT.

How is your team structured?
I'm in the Data Analysis and ML Utilization Team. We're a group of eight: one PjM (Project Manager)/PdM (Product Manager), one Scrum Master, and six engineers.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
Since KTC is part of a large corporation, my first impression was that it'd be a pretty traditional and stable company. But once I joined, I saw generative AI being used in Slack and AI actively integrated into various systems. It quickly became clear that this is a tech company with a fast-moving, startup-like energy, much more than I expected.

What is the atmosphere like on site?
It's a very open and supportive environment. Not only within the team but across departments, people are quick to offer help. You can ask for advice anytime, which makes it easy to work with peace of mind.

How did you feel about writing a blog post?
Actually, I got to write a Tech Blog post before this one. I'd never written a blog before, but thanks to the support and advice from my team, I was able to write it without any worries. It turned out to be a really valuable experience. I'll continue to do my best to share my new knowledge and experience both inside and outside the company!

Question from Lyu to MakiDon: What are you most proud of in your work so far?
By bringing in-house the output we'd previously generated using machine learning tools, we managed to cut costs and boost click-through rates!

Frank Neezen

Self-introduction
I'm Frank Neezen, a member of the Business Development Department, officially titled Business Development Manager. My primary role, however, is as the Technical Architect, where I help guide the design and implementation of our core global full-service products. My background lies in consulting, where I've focused on advising clients on leveraging Salesforce to meet their technical and operational needs.

How is your team structured?
My direct team consists of four members with a diverse skill set. We collaborate closely with our engineering team to develop software solutions for the global full-service lease business.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
My transition from Salesforce in Amsterdam to KTC in Tokyo was remarkably smooth! I had some initial concerns about adapting to the cultural differences, but the exceptional onboarding process and the warm, supportive team at the Jimbocho office made all the difference. From day one, their welcoming attitude helped me settle in effortlessly. My main hurdle, however, was organizing all my personal affairs, for example sorting out banking or registering in my neighborhood, without being able to speak Japanese. I had lots of help from KTC with these things, though.

What is the atmosphere like on site?
Our team is based together in the Jimbocho office, next to many of the engineers. The vibe is open and professional, but also relaxed. There's a good team feeling; we all want to succeed in our work.

How did you feel about writing a blog post?
I've written articles in the past on other topics, though mainly related to Salesforce. I'm always happy to write up and share my personal story of joining KTC!

Question from MakiDon to Frank: Was there anything that surprised you when you came to Japan?
I'm amazed by how safe Japan is: walking around anywhere in Tokyo, the biggest city in the world, feels completely secure! Also, what's truly surprising is that if you lose something, like a wallet or phone, it almost always finds its way back to you. There have been a few times when I didn't even realize I'd lost something, and then someone would come up to me with my lost item. Such a refreshing experience!

Finally

Thank you, everyone, for sharing your thoughts on the company after joining it! There are more and more new members at KINTO Technologies every day! We'll be posting more new-joiner stories from across divisions, so stay tuned! And yes, we're still hiring! KINTO Technologies is looking for new teammates to join us across a variety of divisions and roles. For more details, check it out here!
Maido, ookini! ( º∀º )/ (That's an Osaka-style hello and thank you.) This is Yukachi from the Event Team in the Tech PR Group. In July 2025, Osaka Tech Lab finally got the event venue we'd been longing for! This time, I'll give you a quick guide on how to get to Osaka Tech Lab JCT!

![accessosaka1](/assets/blog/authors/uka/accessosaka/jct.png =600x)

Osaka Tech Lab JCT seats about 40 people.

Address: North Gate Building 20F, 3-1-3 Umeda, Kita-ku, Osaka-shi, Osaka 530-0001

2 minutes from JR Osaka Station, Central Gate (1F) or Renrakubashi Gate (3F)
5 minutes from Osaka Metro Umeda Station, North Gate
7 minutes from Hankyu Umeda Station, 2F Central Gate
7 minutes from Hanshin Umeda Station (via the connecting bridge)

The building is right next to LUCUA 1100. Follow the "ルクアイーレ" (LUCUA 1100) and "オフィスタワー" (Office Tower) signboards! If you're coming by Hankyu or Hanshin, first head toward JR Osaka Station!

![accessosaka1](/assets/blog/authors/uka/accessosaka/0.png =600x)

Use this as a landmark!

![accessosaka1](/assets/blog/authors/uka/accessosaka/2.png =600x)

From JR Osaka Station, exit through the Central Gate or the Renrakubashi Gate and head toward the Office Tower.

![accessosaka1](/assets/blog/authors/uka/accessosaka/1.png =600x)

If you're coming by Osaka Metro, exit through the North Gate and head toward the Office Tower.

![accessosaka1](/assets/blog/authors/uka/accessosaka/3.png =600x)

If you're coming from the 1st floor, go this way.

![accessosaka1](/assets/blog/authors/uka/accessosaka/4.png =600x)

If you're coming from the 3rd floor, go this way.

![accessosaka1](/assets/blog/authors/uka/accessosaka/5.png =600x)

Take the escalator up; the connecting walkway is on the 4th floor. It's even quicker if you get on from the 3rd floor!

![accessosaka1](/assets/blog/authors/uka/accessosaka/6.png =600x)

Go straight ahead to the automatic doors at the front.

![accessosaka1](/assets/blog/authors/uka/accessosaka/front.jpg =600x)

Enter through the main entrance and turn right.

:::message
For building security reasons, the main entrance may be locked depending on the time of day. If that happens, please contact us via the information board with a QR code placed at the main entrance! A staff member will come to meet you.
:::

![accessosaka1](/assets/blog/authors/uka/accessosaka/7.png =600x)

Please take the elevators on the near side up to the 20th floor.

![accessosaka1](/assets/blog/authors/uka/accessosaka/8.png =600x)

As soon as you step off, KINTO Technologies is right there. Welcome!

In closing

That's it! We hope you enjoy your time at Osaka Tech Lab! Thank you for coming all the way here, and we look forward to seeing you again (^_^)/