KINTO Technologies Tech Blog
Introduction

Hello! I am Yamada, and I develop and operate in-house tools on the Platform Engineering Team of KINTO Technologies' (KTC) Platform Group. If you want to know more about the CMDB developed by the Platform Engineering team, please check out the article below:

https://blog.kinto-technologies.com/posts/2023-12-14-CMDB/

This time, I would like to talk about how we implemented a CMDB data search function and a CSV output function in a chatbot, one of the CMDB's features, using generative AI and Text-to-SQL. The CMDB chatbot lets you ask questions about how to use the CMDB or about the data it manages. Questions about CMDB data were originally answered by a RAG mechanism built on ChromaDB, but we moved to a Text-to-SQL implementation for the following reasons:

Advantages of Text-to-SQL over RAG

- Data accuracy and real-time availability: the latest data can be retrieved in real time, directly from the CMDB database, and no additional processing is required to keep it up to date.
- System simplification: no infrastructure for a vector DB or embedding processing is required (ChromaDB and the batch jobs that refreshed the embedded data are no longer needed).

For these reasons, we decided that Text-to-SQL is a better fit for a system that handles structured data such as a CMDB.

What Is Text-to-SQL?

Text-to-SQL is a technique for converting natural language queries into SQL queries. It allows even users without SQL knowledge to easily extract the information they need from a database. In our case, it makes it possible to retrieve the data managed in the CMDB database, such as products, domains, teams, users, and vulnerability information from ECR and VMDR, from natural language queries.
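To make the idea concrete: at its core, Text-to-SQL is a single LLM call whose prompt packs the question together with a textual description of the schema. The sketch below is only illustrative; the helper name, the wording, and the `llm` client are placeholders, not the prompts actually used in the CMDB:

```python
def build_text_to_sql_prompt(question: str, schema_str: str) -> str:
    """Pack the question and a schema description into one prompt for the LLM."""
    return (
        "Generate a single MySQL SELECT statement that answers the question below.\n"
        "Respond with SQL only: no explanations, no markdown.\n\n"
        f"Database Schema:\n{schema_str}\n\n"
        f"Question: {question}\n"
    )

# The prompt is then sent to an LLM client, for example:
# sql = llm.complete(build_text_to_sql_prompt(question, schema_str)).text.strip()
```

Everything beyond this skeleton, such as which schema to describe, how to constrain the output, and how to run the result safely, is where the real work lies.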
The following are some examples of how this could be used within KTC:

- Retrieving a list of domains that are not properly managed (domains not linked to products in the CMDB)
- Retrieving the Atlassian IDs of all employees. The MSP (Managed Service Provider) team needs these because it creates tickets for requests, such as addressing PC vulnerabilities, by mentioning (tagging) the relevant individuals.
- Aggregating the number of vulnerabilities detected in resources related to the products each group is responsible for
- Extracting products for which no AWS resource start/stop schedule has been set

Previously, when a request to extract such data came to the Platform Engineering team, a person in charge would run a SQL query directly against the CMDB database, extract and process the data, and hand it over to the requester. Once requesters can extract data themselves with Text-to-SQL in the CMDB chatbot, they no longer have to go through a person in charge, as shown in the figure below.

Text-to-SQL is a convenient feature, but you must be aware of the risk of insecure SQL generation. While the following figure illustrates an extreme case, because SQL is generated from natural language, there is a risk of unintentionally producing statements that update or delete data or modify table structures. We prevent unsafe SQL with the following measures:

- Connecting to a read-only DB endpoint
- Giving the DB user read-only permissions
- Running a validation check in the application so that no command other than SELECT is ever executed

System Configuration

Here is the architecture of the CMDB. Resources that are not relevant to this article have been omitted.
As I explained at the beginning, we had originally used ChromaDB as a vector DB: information on how to use the CMDB was pulled from Confluence (implemented with LlamaIndex), CMDB data was retrieved from the database (implemented with Spring AI), and both were loaded into ChromaDB. This time, we migrated the answering of questions about CMDB data from the RAG feature built on Spring AI + ChromaDB to a Text-to-SQL implementation.

Text-to-SQL Implementation

From here on, I would like to explain the implementation while showing you the actual code.

CMDB Data Search Function

Retrieving Schema Information

First, retrieve the schema information the LLM needs to generate SQL. The less schema information there is, the higher the accuracy, so we specify only the necessary tables. Since column comments are important judgment criteria when the LLM generates SQL statements, all of them need to be filled in beforehand.

```python
def fetch_db_schema():
    cmdb_tables = ['table1', 'table2', ...]
    cmdb_tables_str = ', '.join([f"'{table}'" for table in cmdb_tables])
    query = f"""
        SELECT
            t.TABLE_SCHEMA, t.TABLE_NAME, t.TABLE_COMMENT,
            c.COLUMN_NAME, c.DATA_TYPE, c.COLUMN_KEY, c.COLUMN_COMMENT
        FROM information_schema.COLUMNS c
        INNER JOIN information_schema.TABLES t
            ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
            AND c.TABLE_NAME = t.TABLE_NAME
        WHERE t.TABLE_SCHEMA = 'cmdb'
            AND t.TABLE_NAME IN ({cmdb_tables_str})
        ORDER BY t.TABLE_SCHEMA, t.TABLE_NAME, c.COLUMN_NAME
    """
    connection = get_db_connection()
    try:
        cursor = connection.cursor()
        cursor.execute(query)
        return cursor.fetchall()
    finally:
        cursor.close()
        connection.close()
```

Example of retrieved results:

| TABLE_SCHEMA | TABLE_NAME | TABLE_COMMENT | COLUMN_NAME | DATA_TYPE | COLUMN_KEY | COLUMN_COMMENT |
| --- | --- | --- | --- | --- | --- | --- |
| cmdb | product | Product table | product_id | bigint | PRI | Product ID |
| cmdb | product | Product table | product_name | varchar | | Product name |
| cmdb | product | Product table | group_id | varchar | | Product's responsible department (group) ID |
| cmdb | product | Product table | delete_flag | bit | | Logical deletion flag 1=deleted, 0=not deleted |

Format the retrieved schema information into text for the prompt passed to the LLM:

```python
def format_schema(schema_data):
    schema_str = ''
    for row in schema_data:
        schema_str += (
            f"Schema: {row[0]}, Table Name: {row[1]}, Table Comment: {row[2]}, "
            f"Column Name: {row[3]}, Data Type: {row[4]}, "
            f"Primary Key: {'yes' if row[5] == 'PRI' else 'no'}, "
            f"Column Comment: {row[6]}\n"
        )
    return schema_str
```

Each column is converted into a line of text like the following and passed to the LLM as schema information.
```
Schema: cmdb, Table Name: product, Table Comment: プロダクトテーブル, Column Name: product_id, Data Type: bigint, Primary Key: yes, Column Comment: プロダクトID
Schema: cmdb, Table Name: product, Table Comment: プロダクトテーブル, Column Name: product_name, Data Type: varchar, Primary Key: no, Column Comment: プロダクト名
Schema: cmdb, Table Name: product, Table Comment: プロダクトテーブル, Column Name: group_id, Data Type: varchar, Primary Key: no, Column Comment: プロダクトの担当部署(グループ)ID
Schema: cmdb, Table Name: product, Table Comment: プロダクトテーブル, Column Name: delete_flag, Data Type: bit, Primary Key: no, Column Comment: 論理削除フラグ 1=削除, 0=未削除
```

Generating SQL Queries from the Question and Schema Information, Using the LLM

This is the Text-to-SQL portion, where SQL queries are generated from natural language. Based on the question and the schema information, we specify various conditions in the prompt and have the LLM generate SQL. For example:

- Generate valid queries for MySQL 8.0
- Use fuzzy search for condition expressions other than IDs
- By default, exclude logically deleted data from searches
- Do not generate anything other than SQL statements
- Add context information:
  - Convert questions of the form "... of KTC" or "... of CMDB" into "all ..."
  - Interpret questions about regions as AWS regions (e.g., convert "Tokyo region" to ap-northeast-1)

The instruction "Do not generate anything other than SQL statements" is particularly important. When it was not conveyed properly, responses often included unnecessary text such as: "Based on the provided information, the following SQL has been generated: SELECT ...". So the prompt needs to ensure that only an SQL statement of the form "SELECT ..." is generated, with no extra text, explanations, or markdown formatting.

```python
def generate_sql(schema_str, query):
    prompt = f"""
    Generate a SQL query based on the given MySQL database schema, system contexts, and question.
    Follow these rules strictly:
    1. Use MySQL 8.0 syntax.
    2. Use `schema_name.table_name` format for all table references.
    3. For WHERE clauses:
       - Primarily use name fields for conditions, not ID fields
       - Use LIKE '%value%' for non-ID fields (fuzzy search)
       - Use exact matching for ID fields
       - Include "delete_flag = 0" for normal searches
       - Use "delete_flag = 1" only when the question specifically asks for "deleted" items

    CRITICAL INSTRUCTIONS:
    - Output MUST contain ONLY valid SQL query.
    - DO NOT include any explanations, comments, or additional text.
    - DO NOT use markdown formatting.
    - DO NOT generate invalid SQL query.

    Process:
    1. Carefully review and understand the schema.
    2. Generate the SQL query using ONLY existing tables and columns.
    3. Double-check query against schema for validity.

    System Contexts:
    - Company: KINTO Technologies Corporation (KTC)
    - System: Configuration Management Database (CMDB)
    - Regions: AWS Regions (e.g., Tokyo region = ap-northeast-1)

    Interpretation Rules:
    - "KTC" or "CMDB" in query: Refer to all information in the database
      Examples:
        "Employees in KTC" -> "All users"
        "KTC's products" -> "All products"
        "Domains on CMDB" -> "All domains"
    - Region mentions: Interpret as AWS Regions
      Example:
        "ECR repositories in Tokyo region" -> "ECR repositories in ap-northeast-1"

    Database Schema:
    {schema_str}

    Question: {query}
    """
    return llm.complete(prompt).text.strip()
```

Validating That Only SELECT Statements Are Generated

To reduce the risk of unsafe SQL, we connect to a read-only DB endpoint, but we also check that nothing other than a query has been generated.

Executing the SQL Query Generated by the LLM

Generating an Answer from the SQL Query, the Execution Results, and the Question
Pass the last executed SQL query, the results of the SQL execution, and the question to the LLM to generate an answer. Unlike the Text-to-SQL prompt, which includes many instructions, this prompt has few, but it still specifies that the answer must not include the DB schema configuration or physical names.

```python
def generate_answer(executed_sql, sql_result, query):
    prompt = f"""
    Generate an answer based on the provided executed SQL, its result, and the question.
    Ensure the answer does not include information about the database schema or the column names.
    Respond in the same language as the question.

    Executed SQL: {executed_sql}

    SQL Result: {sql_result}

    Question: {query}
    """
    return llm.stream_complete(prompt)
```

Execution Result

Question: Tell me the products of the Platform Group.

Based on this question and the database schema, the LLM generates SQL as follows:

```sql
SELECT product_name FROM product WHERE group_name LIKE '%プラットフォーム%' AND delete_flag = 0;
```

This SQL and the results of its execution are then passed to the LLM to generate an answer. This is the vulnerability information retrieved from the ECR scan results.

CSV Output Function

Generating a JSON Object Containing an SQL Query, Based on the Output Request and Schema Information

Based on the natural language describing the CMDB data to be output as CSV, we use the LLM to generate a JSON object containing the column names to output and the SQL statement that retrieves them. The conditions are basically the same as in the CMDB data search prompt, but the emphasis is on generating a JSON object that follows the template. Here is the prompt:

```python
prompt = f"""
Generate a SQL query and column names based on the given MySQL database schema, system contexts and question.
Follow these rules strictly:
1. Use MySQL 8.0 syntax.
2. Use `schema_name.table_name` format for all table references.
3. For WHERE clauses:
   - Primarily use name fields for conditions, not ID fields
   - Use LIKE '%value%' for non-ID fields (fuzzy search)
   - Use exact matching for ID fields
   - Include "delete_flag = 0" for normal searches
   - Use "delete_flag = 1" only when the question specifically asks for "deleted" items

Process:
1. Carefully review and understand the schema.
2. Generate the SQL query using ONLY existing tables and columns.
3. Extract the column names from the query.
4. Double-check query against schema for validity.

System Contexts:
- Company: KINTO Technologies Corporation (KTC)
- System: Configuration Management Database (CMDB)
- Regions: AWS Regions (e.g., Tokyo region = ap-northeast-1)

Interpretation Rules:
- "KTC" or "CMDB" in query: Refer to all information in the database
  Examples:
    "Employees in KTC" -> "All users"
    "KTC's products" -> "All products"
    "Domains on CMDB" -> "All domains"
- Region mentions: Interpret as AWS Regions
  Example:
    "ECR repositories in Tokyo region" -> "ECR repositories in ap-northeast-1"

Output Format:
Respond ONLY with a JSON object containing the SQL query and column names:
{{
    "sql_query": "SELECT t.column1, t.column2, t.column3 FROM schema_name.table_name t WHERE condition;",
    "column_names": ["column1", "column2", "column3"]
}}

CRITICAL INSTRUCTIONS:
- Output MUST contain ONLY the JSON object specified above.
- DO NOT include any explanations, comments, or additional text.
- DO NOT use markdown formatting.

Ensure:
- "sql_query" contains only valid SQL syntax.
- "column_names" array exactly matches the columns in the SQL query.

Database Schema:
{schema_str}

Question: {query}
"""
```

Validating and Executing the Generated SQL

As with the CMDB data search function, a validation check allows only SELECT statements, and the SQL query generated by the LLM is then executed.
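The validation check itself isn't shown in the article. As a rough illustration of what an application-side SELECT-only guard can look like, here is a minimal sketch; the helper name is hypothetical, the keyword list is not exhaustive, and in production this should be one layer alongside the read-only endpoint and read-only DB user described earlier:

```python
import re

def assert_select_only(sql: str) -> str:
    """Raise if the generated SQL is anything other than a single SELECT query."""
    stmt = sql.strip().rstrip(';').strip()
    # Reject multi-statement payloads such as "SELECT 1; DROP TABLE product"
    if ';' in stmt:
        raise ValueError("Multiple SQL statements are not allowed")
    # Allow only statements that read data (SELECT, or WITH ... SELECT)
    if not re.match(r'^(SELECT|WITH)\b', stmt, re.IGNORECASE):
        raise ValueError("Only SELECT statements are allowed")
    # Defense in depth: reject destructive keywords anywhere in the statement
    forbidden = r'\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT|REVOKE)\b'
    if re.search(forbidden, stmt, re.IGNORECASE):
        raise ValueError("Potentially destructive SQL detected")
    return stmt
```

A keyword check like this can produce false positives (for example, a column literally named "update"), which is exactly why it should complement, not replace, read-only permissions at the database layer.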
Outputting a CSV File from the Execution Results

Use the SQL results and the column names generated by the LLM to output a CSV file.

```python
column_names = response_json["column_names"]          # column names from the LLM-generated JSON object
sql_result = execute_sql(response_json["sql_query"])  # result of executing the LLM-generated SQL

csv_file_name = "output.csv"
with open(csv_file_name, mode="w", newline="", encoding="utf-8-sig") as file:
    writer = csv.writer(file)
    writer.writerow(column_names)
    writer.writerows(sql_result)

return FileResponse(
    csv_file_name,
    media_type="text/csv",
    headers={"Content-Disposition": 'attachment; filename="output.csv"'}
)
```

Execution Result

By posting the content and columns you want to output in the chat, you can now get a CSV file as shown below. First, the LLM creates a JSON object like the one below from the chat message and the database schema:

```json
{
    "sql_query": "SELECT service_name, group_name, repo_name, region, critical, high, total FROM ecr_scan_report WHERE delete_flag = 0;",
    "column_names": ["プロダクト名", "部署名", "リポジトリ名", "リージョン名", "critical", "high", "total"]
}
```

Executing the SQL based on this information produces a CSV file like the following:

| Product name | Division name | Repository name | Region name | critical | high | total |
| --- | --- | --- | --- | --- | --- | --- |
| CMDB | Platform | ××××× | ap-northeast-1 | 1 | 2 | 3 |
| CMDB | Platform | ××××× | ap-northeast-1 | 1 | 1 | 2 |
| CMDB | Platform | ××××× | ap-northeast-1 | 1 | 1 | 2 |

Next Steps

So far, we have used generative AI and Text-to-SQL to implement a CMDB data search function and a CSV output function. There is still room for improvement, though:

- Speed: the CMDB data search function calls the LLM twice, which makes it slow.
- Weakness with complex and ambiguous questions: natural language is inherently ambiguous, allowing multiple interpretations of a question.
- Accurate understanding of the schema: schema information is complex, and it is difficult to make the system understand the relationships between columns across tables.
Addition of Context Information

Currently, the first prompt adds only minimal context information. In anticipation of more context being added in the future, we are considering transforming the question, using the large body of context information, into an appropriate question before the first LLM call. We are also exploring fine-tuning with a dataset that includes KTC-specific context.

Implementing Query Routing

The APIs called from the front end are currently split in two: one for CMDB data search and one for CSV output. We want to unify them into a single API that determines which operation to invoke based on the content of the question.

Conclusion

This time, I discussed the CMDB data search function and the CSV output function built with generative AI and Text-to-SQL. It's difficult to keep up with new generative AI technologies as they continue to emerge every day. But since AI will be involved in application development more than ever, I would like to actively adopt any technologies that interest me or that seem applicable to our company's products.
Self-Introduction

Hi, I'm Tetsu. I joined KTC in March 2025. Before that, I worked as an infrastructure engineer handling both on-premises and cloud environments; at KTC, I've joined the team as a platform engineer. I'm a big fan of travel and nature, so I usually head out somewhere far during long holidays.

Overview

In this article, I'll walk you through how to update your GitHub Actions workflow to pull public container images, such as JDK, Go, or nginx, from the ECR Public Gallery instead of Docker Hub.

Starting April 1, 2025, Docker Hub is tightening the rules on pulling public container images as an unauthenticated user. Specifically, unauthenticated users are limited to 10 image pulls per hour per source IP address. Learn more here.

The virtual machines that run GitHub Actions workflows are shared across all users, which means Docker Hub sees only a limited set of source IP addresses. Because of this, the limit above became a bottleneck when building containers with GitHub Actions, so we needed a workaround.

Prerequisites

At our company, we used GitHub Actions with the following configuration to automate container builds (this is a roughly abstracted configuration).

Considering Countermeasures

We explored a few ways to deal with the Docker Hub pull limit.

Using a Personal Access Token (PAT) to Log In to Docker Hub

You might be thinking, "Why not just authenticate with Docker Hub in the first place?" Fair point. You can generate a Docker Hub PAT and use it in your GitHub Actions workflow with docker login to get around the pull limit. Just keep in mind that PATs are tied to individual users. Since our team shares GitHub Actions workflows, linking tokens to individual users isn't ideal from a license management standpoint.
Using an Organization Access Token (OAT) to Log In to Docker Hub

This is basically the same method as above, but the key difference is that you authenticate with a shared token tied to the organization rather than an individual. To use this shared token, you need a Docker Desktop license on either the Team or Business plan.

Migrating to GitHub Container Registry (GHCR)

Next is pulling container images from GitHub Container Registry (GHCR), provided by GitHub. By using {{ secrets.GITHUB_TOKEN }} in your GitHub Actions workflow, you can authenticate and pull container images. That said, searching for images can be tricky, especially when you're trying to match the versions you currently use on Docker Hub.

Migrating to ECR Public Gallery

Finally, there's pulling container images from the ECR Public Gallery provided by AWS. The restrictions differ depending on whether you authenticate with IAM, but it's basically free to use.

For unauthenticated users, the following limits apply per source IP address:

- 1 pull per second
- 500 GB of pulls per month

Authenticated users are instead subject to the following limits per account:

- 10 pulls per second
- Transfers over 5 TB/month are charged at $0.09 per GB (the first 5 TB is free)

You can find more details in the official documentation:

https://docs.aws.amazon.com/ja_jp/AmazonECR/latest/public/public-service-quotas.html
https://aws.amazon.com/jp/ecr/pricing/

If you are not using an AWS account, data transferred from a public repository is limited based on the source IP. The ECR Public Gallery includes official Docker images equivalent to those on Docker Hub, which makes it easy to use in practice and simplifies migration.

Case Comparison

I reviewed the proposals above and evaluated them based on QCD (Quality, Cost, Delivery).
Here's the comparison table:

| Proposal | Quality | Cost | Delivery |
| --- | --- | --- | --- |
| Log in to Docker Hub using a PAT | × Relies on personal tokens, which isn't ideal for organizations (no change in convenience from the current setup) | 〇 No additional cost | 〇 Easy to implement with little workload |
| Log in to Docker Hub using an OAT | 〇 No change from the current setup | × License costs increase with the number of users | × License changes take time to process |
| Migrate to GHCR | △ Hard to find equivalents of the images currently used on Docker Hub | 〇 No additional cost | 〇 Easy to implement with little workload |
| Migrate to ECR Public Gallery | 〇 Easy to find the images currently used on Docker Hub | 〇 No additional cost | 〇 Easy to implement with little workload |

One advantage of using a PAT or OAT is that everything stays as convenient as it is now. GHCR can be set up easily using GitHub's {{ secrets.GITHUB_TOKEN }}, but it's harder to search for container images than on ECR Public Gallery. ECR Public Gallery requires some IAM policy changes, but they're minor, so the extra workload is minimal.

Based on these points, we decided to migrate to ECR Public Gallery, as it's low-workload, cost-free, and offers good usability. Note: depending on your environment or organization, this option may not always be the best fit.

Settings for Migrating to ECR Public Gallery

To migrate, you'll need to update the container image source, set up the YAML file for the GitHub Actions workflow, and configure AWS accordingly.

Diagram

Fixing the Container Image Source

Searching for Container Images

In most cases, you define where to pull container images from in files like your Dockerfile or docker-compose.yml. Here, we'll walk through migrating the source of a JDK container image from Docker Hub to ECR Public Gallery using a Dockerfile.
Let's say your Dockerfile includes a FROM clause like this:

```dockerfile
FROM eclipse-temurin:17.0.12_7-jdk-alpine
```

Search the ECR Public Gallery to check whether the image is available there. In this case, search for the name of the official Docker Hub image before the ":" (eclipse-temurin) and pick the one labeled "by Docker". Select "Image tags" to display the image list, type the tag of the official Docker Hub image (in this case, 17.0.12_7-jdk-alpine) into the image tags search field to find the image you're looking for, then copy the "Image URI".

Fixing the Container Image Source

Paste the copied container image URI into the FROM line. In this case, the updated URI looks like the example below (note the addition of public.ecr.aws/docker/library/ compared to the original):

```dockerfile
FROM public.ecr.aws/docker/library/eclipse-temurin:17.0.12_7-jdk-alpine
```

With this change, your setup will pull images from ECR Public Gallery.

AWS Configuration

To pull from ECR Public Gallery while authenticated, you'll need to set up an IAM role and policy.

IAM Role

You can follow the steps in GitHub's official documentation:

https://docs.github.com/ja/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services

Start by setting up the identity provider, then create the IAM role.

IAM Policy

Create an IAM policy that allows the actions required to pull from the ECR Public Gallery. I referred to the following docs:

https://docs.aws.amazon.com/ja_jp/AmazonECR/latest/public/docker-pull-ecr-image.html

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GetAuthorizationToken",
            "Effect": "Allow",
            "Action": [
                "ecr-public:GetAuthorizationToken",
                "sts:GetServiceBearerToken"
            ],
            "Resource": "*"
        }
    ]
}
```

Attach this IAM policy to the IAM role you created above.

Adding a Login Step for ECR Public Gallery in GitHub Actions

To log in to the ECR Public Gallery with authentication, add a login step to the YAML file that defines the GitHub Actions workflow.
In our setup, we add the following before the Docker build step:

```yaml
## Log in to ECR Public Gallery
- name: Login to ECR Public Gallery
  id: login-ecr-public
  run: |
    aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
```

Note: since the ECR Public Gallery is hosted in the us-east-1 region, make sure to explicitly set --region us-east-1.

Conclusion

In this article, we walked through how to set up your GitHub Actions workflow to pull public container images (like JDK, Go, and nginx) from the ECR Public Gallery instead of Docker Hub. I hope this helps with your development and daily tasks!
Introduction

Hello, this is Hirata from the Analysis Production Group! As an analyst, I'd like to talk about how I streamlined the SQL creation tasks I handle every day. In this article, I'll cover how I used GitHub Copilot Agent and Python to streamline the task of writing complex SQL that runs to hundreds of lines, the trial-and-error process, the results, and future improvements.

Summary:

✔︎ Prepare table information in advance and have the generative AI create SQL
✔︎ Implement a system that automatically executes and checks the generated SQL using Python
✔︎ Have the AI automatically fix errors when they occur, improving work efficiency

Background: Daily SQL Creation Tasks and Their Challenges

I faced the following problems daily:

Complicated interactions with the generative AI
I had to repeatedly explain table information, data types, date formats, and so on to the generative AI every time, which was time-consuming.

Creating massive SQL
I have to write hundreds of lines of SQL for tasks such as extracting users for marketing purposes or creating data for analysis, with complex processing logic scattered throughout.

Repeated trial and error (the loop)
The repetitive cycle of copying and executing the generated SQL and, when an error occurred, forwarding the error log to request a correction became a bottleneck. If I fixed the SQL myself, it diverged from the latest version created by GitHub Copilot, and when I requested the next fix, it sometimes reverted to a previous state.

Trial and Error: Building an Automated Workflow Using Generative AI and Python

I improved work efficiency by adopting the following process.

Overview of the Automation Flow

Registration of preliminary information
I compile the structure of each table, data types, sample data, sample SQL, and processing tips into their respective prompt files.
SQL generation using generative AI
I give the generative AI a prompt describing the full flow: "Generate SQL based on the table info, save it, and verify execution," and it automatically produces the SQL file.

Execution and checking with Python
I execute the generated SQL using a Python script. If an error occurs, I feed back the error log and prompt an automatic correction.

Key Points of the Approach

Below are the directory structure and example files I actually built:

rules/conversation_rules.prompt.md
Basic conversation rules: generate and save SQL based on table information, then execute the Python file to check whether the SQL is correct. This file also describes the rules for SQL creation and the preferred conversation style.

tables/.prompt.md
Table information. By including sample data, the generative AI can judge the characteristics of the data.

````markdown
## テーブル名
users

## 説明
ユーザー情報のテーブル。
user_idをキーにorderテーブルと紐付け可能

## sample SQL
```sql
select
    user_id as "顧客ID",
    name as "顧客名",
    address as "住所"
from users
```

## sample data
| 顧客ID | 顧客名 | 住所 |
| ------ | -------- | -------------------------------- |
| 1 | 山田太郎 | 東京都新宿区西新宿 |
| 2 | 佐藤花子 | 大阪府大阪市中央区難波 |
| 3 | 鈴木次郎 | 愛知県名古屋市中区栄 |
````

sql/
The generated SQL is saved here as files.

utils/query_checker.py
After creating the SQL, this Python file is executed to check whether the SQL is correct.

Example of an Actual GitHub Copilot Agent Response

Here's a simple example to check how it actually operates.

[in]
Write an SQL query to count how many emails were sent this month, grouped by user.

[out]
First, let's check the schema information, then create an SQL query to analyze email delivery. We'll look for table information related to email delivery.

"› Searched for files matching /prompts/tables/**, 14 matches"

We've located schema information related to email delivery history! Let's review the details and write an SQL query to extract the number of deliveries made this month.

Read メール配信履歴.prompt.md

Schema information has been confirmed.
It appears the data is stored in a table named メール配信履歴テーブル (Email Delivery History Table). We'll now create an SQL query to aggregate the number of email deliveries for this month (April 2025). Key columns in this table include:

- email_id: newsletter ID
- delivery_timestamp: delivery date and time
- delivery_status: delivery status

Now, let's create the SQL query to aggregate the number of email deliveries for this month and save it to a file named メール配信集計_今月.sql. This SQL query will:

- Filter emails delivered between April 1 and April 30, 2025
- Include only rows where the delivery status is "Delivered"
- Calculate the number of deliveries, successes, failures, and total deliveries per user
- Display overall totals at the end

Would you like to validate the query using Python? Let's run the SQL we just created for verification:

python3 query_checker.py メール配信集計_今月.sql

SQL validation complete! The query was validated successfully and is executable. The test run showed no syntax errors and confirmed a correct execution plan.

Results and Future Prospects

Fortunately, the entire process ran smoothly, marking the first step toward automating SQL creation! That said, the process is not perfect. What challenges and improvements should I work on?

● Current Challenges

Review workload
I tend to miss errors more easily when reviewing AI-generated SQL than when writing SQL by hand. I am still in the trial-and-error stage of developing a verification mechanism and review methods that would let even non-SQL users check queries effectively, and I hope to improve this going forward. I also hope that advances in generative AI will help address these challenges!

Checking whether data has been extracted as intended
There are cases where the requirement definitions are incomplete, or where I fail to put the picture in my head into words accurately, making it difficult to automatically determine whether the processing matches my intent.
There is still room for improvement in conveying subtle nuances and intentions.

● The Next Challenges

Automating the number check
As a first step toward more sophisticated reviews, I would like to implement a function that checks whether the number of extracted rows matches my expectations.

Accumulating data processing know-how, the "secret sauce"
I want to keep adding effective data processing techniques to the prompts as they become apparent through use.

Expanding into analysis automation
Ultimately, I aim to build a system that can automate, to some extent, the workflow from SQL creation through to analysis of the extracted data!
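As a starting point for the "number check" idea mentioned above, a small helper could execute the generated SQL and compare the returned row count against an expected value. The sketch below is illustrative only: it uses Python's built-in sqlite3 for the demonstration, whereas the actual database behind query_checker.py is not covered in this article, and the helper name is hypothetical:

```python
import sqlite3

def check_row_count(connection, sql: str, expected_rows: int) -> bool:
    """Execute the generated SQL and report whether the row count matches expectations."""
    rows = connection.execute(sql).fetchall()
    if len(rows) != expected_rows:
        print(f"NG: expected {expected_rows} rows, got {len(rows)}")
        return False
    print(f"OK: {expected_rows} rows as expected")
    return True

# Illustration with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Taro"), (2, "Hanako")])
check_row_count(conn, "SELECT user_id, name FROM users", expected_rows=2)
```

A check like this only catches count mismatches, of course; verifying that the right rows were extracted still needs human review or richer assertions.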
Introduction

Hello! I'm Kameyama, a web engineer in the Project Promotion Group at KINTO Technologies, currently studying front-end development.

In modern web development, component-oriented design is mainstream. Splitting the UI into reusable parts improves development efficiency and maintainability. Web Components and Tailwind CSS are both powerful tools that support component-oriented front-end development.

Web Components is a technology, based on web standards, for creating encapsulated, reusable custom elements, and it has been attracting attention in recent years. Tailwind CSS, on the other hand, is a CSS framework that enables fast UI styling with a utility-first approach. Tailwind CSS v4 recently arrived with improved performance, and updates remain active.

At first glance, these technologies may seem like a good match. The combination of markup and logic encapsulated per component (Web Components) with utility classes for effortless styling (Tailwind CSS) seemed attractive... or so I thought. Once I actually started developing, it just wouldn't work. Digging in, I found that the Shadow DOM, which is at the core of Web Components, and Tailwind CSS's styling mechanism are fundamentally at odds. In this article, I'll summarize what I learned about why the two are a poor match, particularly from the Shadow DOM perspective, and why they should not be used together.

About Web Components: What Is the Shadow DOM?

Web Components consists mainly of the following three technologies:

- Custom Elements: an API for defining your own HTML elements (e.g., <my-button>)
- Shadow DOM: a technology that isolates (encapsulates) a component's internal DOM tree and styles from the outside
- HTML Templates: the <template> and <slot> elements for holding reusable fragments of markup

Of these, it is the Shadow DOM that directly causes the incompatibility with Tailwind CSS.

When you attach a Shadow DOM to an element, that element becomes a shadow host with a hidden internal DOM tree (the shadow tree). As a rule, elements inside the shadow tree are unaffected by styles from outside the shadow tree (the main document or a parent shadow tree), and conversely, styles defined outside the shadow tree do not apply inside it.

This provides powerful style encapsulation: a component's styles cannot be polluted by external CSS rules, and its internal styles cannot leak out. Even in an environment where different CSS methodologies coexist, you don't have to worry about a component's appearance breaking unexpectedly.

Here is an example of the source code when the two are combined (note that the Tailwind-class annotations are written as HTML comments; JavaScript line comments inside the template literal would become part of the rendered HTML):

```javascript
class MyStyledComponent extends HTMLElement {
  constructor() {
    super();
    // Attach a Shadow DOM (open mode allows external access)
    const shadowRoot = this.attachShadow({ mode: 'open' });

    // HTML structure inside the Shadow DOM
    const template = document.createElement('template');
    template.innerHTML = `
      <div class="container mx-auto p-4 bg-blue-200"> <!-- Tailwind classes -->
        <p class="text-xl font-bold text-gray-800">Hello from Shadow DOM!</p> <!-- Tailwind classes -->
        <button class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded">
          Click me
        </button>
      </div>
    `;
    shadowRoot.appendChild(template.content.cloneNode(true));

    // ★ Here is the problem ★ How do we apply styles inside the Shadow DOM...?
    // External stylesheets do not reach in, as a rule.
  }
}

customElements.define('my-styled-component', MyStyledComponent);
```

As in the example above, even if you use Tailwind classes such as <div class="container mx-auto p-4 bg-blue-200"> inside MyStyledComponent's Shadow DOM, by default none of those styles are applied.

About Tailwind CSS: Utility-First and Global CSS

Tailwind CSS takes the approach of building UIs quickly by writing low-level utility classes such as flex, pt-4, and text-blue-500 directly in your HTML. During the build process, Tailwind scans your project's HTML, JavaScript, TypeScript, and other files and generates the CSS rules corresponding to the utility classes in use. The generated CSS is normally emitted as a single global stylesheet and loaded in the HTML document's <head> or similar.

For example, if your HTML contains <div class="flex pt-4">, Tailwind generates CSS rules like the following and includes them in the global stylesheet:

```css
/* Example of CSS generated by Tailwind */
.flex {
  display: flex;
}
.pt-4 {
  padding-top: 1rem;
}
```

The crucial point about this styling mechanism is that the CSS rules are defined in the global scope.

A Hopeless Incompatibility: Shadow DOM Encapsulation vs. Tailwind's Global Styles

Here is the heart of the problem:

- The Shadow DOM encapsulates internal elements so that external styles do not apply to them
- Tailwind CSS generates the CSS rules for the utility classes in use in the global scope

These two are fundamentally contradictory. CSS rules that Tailwind generates globally, such as .flex { display: flex; }, cannot cross the Shadow DOM boundary to reach elements inside the shadow tree. In the earlier example, the reason Tailwind's styles are not applied to <div class="container mx-auto p-4 bg-blue-200"> is that the CSS rules for these classes exist outside the Shadow DOM (in the main document's global scope), and the Shadow DOM blocks them from applying.

A note on Tailwind CSS v4: v4 touts performance improvements from its new engine, but the basic styling mechanism (scanning project files and generating the CSS for the utility classes globally) is unchanged. Using v4 therefore does not resolve the incompatibility with the Shadow DOM.

Can Anything Be Done? (Are There Workarounds?)
この問題を解決するために、色々調べていると、この衝突の回避策はあるにはあるが、どれもWeb ComponentsやTailwind CSSのメリットを損なう、あるいは実装コストが非常に高いものになり、根本的な解決策は見つかりませんでした。苦し紛れなものですが回避策をいくつか紹介します。 ビルドしたTailwindのCSSをShadow DOM内にコピー&ペーストする 各Web ComponentのShadow DOM内に、そのコンポーネントで使用しているTailwindクラスに対応するCSSルールを手動、あるいはビルドツールで抽出して <style> タグとして埋め込む方法です。 デメリット: 非常に手間がかかり、メンテナンス性が低い コンポーネントごとに重複したCSSを持つことになり、ファイルサイズが増大する TailwindのJITコンパイル(使っているクラスだけを生成する)のメリットが活かせない Tailwindの運用ワークフロー(設定ファイル、プラグインなど)と乖離する Shadow DOMを使用しない Web ComponentsでShadow DOMを使わず、Light DOMに要素を配置する方法です。この場合、要素はメインドキュメントのDOMツリーの一部と見なされるため、グローバルなTailwindスタイルが適用されます。 デメリット: Web Componentsの最大のメリットである「スタイルのカプセル化」が失われ、外部のCSSがコンポーネントに影響を与えたり、コンポーネントのスタイルが外部に漏れ出たりする可能性が生じてコンポーネントの独立性が損なわれる これらのアプローチを見てもわかるように、Shadow DOMによる強力なカプセル化と、グローバルスタイルシートに依存するTailwind CSSは、根本的に思想が異なるため、無理に併用しようとするとどちらかの技術のメリットを大きく損なうことになります。 結論:Web ComponentsとTailwind CSSは併用するべきではない これまで見てきたように、Web Components(特にShadow DOMを利用する場合)とTailwind CSSの併用は、両者のメリットを打ち消し合ってしまうため、基本的には避けるべきです。 その理由は、2つの技術が持つスタイリングの 根本的な思想・仕組みが衝突 するからです。 Web Components (Shadow DOM) は、コンポーネントのスタイルを外部から完全に**隔離(カプセル化)**することを目的としている 一方、 Tailwind CSS は、ユーティリティクラスに対応するCSSを グローバルなスタイルシート として生成し、ページ全体に適用することを前提としている このため、Tailwindが生成した便利なユーティリティクラスのスタイルは、Shadow DOMの強固な壁を越えることができず、コンポーネント内部には適用されません。 回避策は存在するものの、いずれもコンポーネントの独立性を犠牲にしたり、開発の複雑さを増大させたりと、本末転倒な結果を招きがちです。それぞれの技術の長所を最大限に活かすためには、併用しないという選択が賢明と言えるでしょう。 今回の記事が、Web ComponentsとTailwind CSSの併用を検討されている方の参考になれば幸いです。
Introduction

Hello, I'm Nakamuraya, in charge of creative for the KINTO Unlimited app. We recently decided to add sound to the app, so I'd like to talk about the process and the thinking behind it.

A business-side member of the KINTO Unlimited project came to me with a casual request (left entirely up to Creative!): to increase the number of users who keep using the app, could we add "sounds so satisfying that people can't help coming back"?

All kinds of apps feature sound, and the apps that feel polished tend to have stylish sound too, right? I briefly dreamed of commissioning a famous sound designer or artist, only to be told there was no budget for original production. So much for dreaming big.

Still, I couldn't compromise on quality, so after researching sound services with a reputation for quality, I decided to use a paid service called Splice (https://splice.com/sounds).

This was a side project alongside my main work, in an area where I had little experience, but if you're wondering:

- How do you choose and assemble sounds from such a huge library?
- What does the design process up to sound implementation look like?
- How does Creative get involved in development?

then please read on.

What Is the Unlimited Sound Worldview?

First comes direction. This is the crucial part that strongly affects every later step. Verbalizing the app's sound worldview makes the approach persuasive, and it gives you judgment criteria for evaluating sounds so you don't sink endless time into auditioning them.

Defining the scope: since this was positioned as an experimental implementation, we implemented it for a specific, minimal slice of the experience, targeting feedback SEs (sound effects) for user operations, plus BGM.

The Unlimited service is a new way of owning a car in which the car keeps upgrading along with technology after purchase; its keywords are futuristic, innovative, optimized, smart, and reassuring. We wanted to put that character into the sound.

The sound this evoked was (as a hypothesis): "modern, pleasant digital sound that blends into the environment." To avoid narrowing the range of ideas and expression, we kept the hypothesis concept loose enough to leave room for play. With an image of something calming yet cool and crisp, I started searching for sounds.

And I quickly realized this would not work: a set of sounds picked on gut feeling by a non-professional was never going to come out harmonious and consistent. But then I found a way to guarantee quality and work efficiently. Splice offers sound packs, including packs aimed at games and app UIs. So I chose packs with a modern, slightly sci-fi yet pleasant theme, picked candidate sounds, and then used Adobe Premiere Pro to lay the SEs over screen recordings of the app to narrow the candidates further.

:::message
Tips: Sound packs that credit the sound designer by name are particularly good. Their concepts are consistent and clear, and their audio quality and loudness are stable (normalized), which I found made them easy to implement without extra adjustment.
:::

Changing Direction

Rather than polishing to perfection, I had project members listen to variations early to get feedback on direction. The basic reaction was positive, but one comment caught my eye: "It's good, but maybe something a bit more down-to-earth would work better?"

That came from members deeply involved in the app and service. I felt Creative needed to pick up on that instinct and interpret the discomfort they couldn't quite put into words.

"Down-to-earth" suggests ordinary, unrefined, commonplace, but rather than taking it literally, I thought about it in design terms: a refined, futuristic sound isn't the right fit → that's not the value we provide users → we should stay close to real users rather than a vision, and aim for empathy.

As the goal of "encouraging continued app use" suggests, the app had been running initiatives based on beginner-friendly content and gamification, focusing on real users rather than one-way value delivery. The original concept wasn't wrong, but the app's concept had been gradually shifting, and the sound needed to be updated accordingly. So I rebuilt the sound concept as providing "a reassuring experience that makes the latest technology approachable and grows together with you."

Here is part of a sample of the sounds reworked under this concept: https://www.youtube.com/watch?v=oeGNNqRJs50 I think the result has a familiar, playful, almost addictive feel, like sounds you half-remember from somewhere.

Before Implementing

Handing the chosen sound files to the engineers with an "over to you!" is not the end of it. The design phase from here on is also critical in shaping the user experience.

For example, it feels great when a sound syncs tightly with the visual highlight of an animation (e.g., the sound plays exactly when the coin glints). Conversely, any mismatch creates discomfort and stress.

Also, for the SE played when a button is pressed, playing it at exactly 0.00 seconds feels stiff, while delaying playback by a few tens of milliseconds feels more natural and polished. (The right approach depends on the theme.)

Incorporating this kind of thinking, we document where, when, and how each sound plays in a spec, so the behavior is reproducible. (At first, we write down the ideal user experience without worrying too much about feasibility.) Since this is not a specialized audio app, the spec avoids deep audio-specific concepts and is organized like this:

- Management ID / sound file name / target screen
- Playback trigger: the user action or event that plays the sound, e.g., "when the XX button is tapped" or "when the YY animation is shown."
- Looping: whether the sound loops.
- Volume: designed from the meaning of and relationships between sounds, e.g., keep BGM and cancel sounds low.
- Delayed playback: adjusts playback timing relative to the trigger, keeping trigger definitions from getting complicated.
- Fade-in: shapes the start of a sound; also useful for avoiding clashes between SE and BGM.
- Fade-out: stopping BGM with a lingering tail rather than cutting off abruptly leaves a polished impression.
- Notes: the intent behind playback timing, etc., written so no questions are left open.

Next, the data. The devices the app is installed on belong to the users, so we have to watch the app size to avoid burdening user devices. The following data spec is not the absolute highest quality, but is set at a high-quality line:

- SE: WAV or AAC format*
- BGM: AAC format
- *WAV is recommended for important sounds (brand SEs) and frequently played SEs; consider AAC for SEs over 200 KB and longer than 1 second.
- Baseline after AAC compression: stereo, 256 kbps variable bitrate (VBR), 44.1/48 kHz sampling rate.

Since SEs play in an instant, WAV (uncompressed, highest quality), whose data plays as-is, is a good fit; AAC (compressed) requires a decode step, which apparently introduces a slight delay. (With today's smartphone processing, only professionals would likely notice the difference.)

There are other things that could be defined in fine detail, such as audio interruption handling and preloading (loading into memory in advance), but at a certain point we share the spec with the producer and engineers and work out the details together. When something is unclear, moving forward together with knowledgeable people instead of agonizing alone is one of the advantages of in-house development.

Closing

The development story continues, but I'll stop here as one milestone. The reason I could get this far in an unfamiliar domain was the use of AI, starting with ChatGPT: I used it to enumerate the necessary considerations and as a sparring partner to deepen my thinking until it held together persuasively. Even so, sound theory turned out to be bottomless no matter how far I dug. What mattered for me was defining things within the range where the company could share a common understanding. I take care to write specs and communicate in a way the project can easily understand, without getting overly technical. (For example, instead of dBFS values, volume is expressed on a relative scale with a reference point, defined as easy-to-grasp numbers from 0.0 to 1.0.)

Even so, sound is extremely deep, and I know much is missing here. Music is also a mass of sensibility that people perceive differently (even differently depending on their state of mind at the time). This article introduced the process of working that kind of thing into the user experience.

Finally, at KINTO Technologies the MVP (Minimum Viable Product) mindset is well established, so if an idea wins support, you can assemble it quickly, take it all the way to development, and then keep iterating while watching how users respond. This project is one example of that, and I'd be glad if it gave you a glimpse of how Creative is involved in that kind of development. Thank you for reading to the end.
Hello, I am Udagawa, an engineer working on Prism Japan . I would like to introduce our marketing initiatives that use React Email to send emails automatically. Challenges We Faced in Our Marketing Initiatives Prism Japan was launched in August 2022, and since the beginning of the service it has acquired users through various marketing initiatives. However, there is no guarantee that users, once acquired, will continue to use the service. Although about two and a half years have passed since the service started, the number of dormant users is still on the rise. To address this issue, we implemented a re-visitation (re-engagement) initiative using push notifications, but we faced several challenges. Push notifications do not reach users who have turned off their notification settings. Even if we send push notifications encouraging users to revisit the app, they do not reach users who have uninstalled it, so we cannot achieve the desired effect. In fact, the push notification consent rate is only about 48%, and considering this rate along with uninstalled users, the number of users who actually receive notifications is quite limited. Furthermore, because users receive notifications from other apps as well, ours tend to get buried among them. In this way, there were limits to the effectiveness of our re-engagement initiative using push notifications. On the other hand, we ask users to register their email addresses when they sign up for membership. The consent rate for emails registered this way remains very high, at about 90%. Even if users have deleted the app, emails can still reach those who have not canceled their membership, making email a suitable marketing channel for the re-engagement initiative. However, from an operational perspective, this initiative had several challenges of its own.
First, marketing resources were limited, with a single staff member handling a wide range of tasks, from planning initiatives to managing social media. Creating email content takes many man-hours: manually tabulating rankings, selecting appropriate images, designing layouts, and so on. Given the marketing staff's limited resources, frequent delivery was difficult. So although we recognized frequent email delivery as an effective marketing method, it was not realistic because of the operational burden.

Using React Email To Automate Email Creation

Thus, we came up with the idea of automating the entire process from email creation to delivery. If we could build a system that automatically collects the information to be displayed in the content, generates the email content, and sends the emails at scheduled dates and times with a predetermined layout, we could deliver emails tailored to users even with limited human resources. However, as engineers, we struggled with how to implement the automatic creation of HTML emails. If we implemented processing that manipulates HTML directly, reusability would be low, and issues such as rendering differences across receiving mail clients would arise. Looking ahead to future content replacement, we needed a highly reusable implementation that allows new content to be added flexibly. Amid these challenges, we discovered a library called React Email, which has the following features:

- HTML emails can be written in JSX
- A real-time preview function
- High reusability through componentization

Especially important is that reusable componentization makes it easy to add new content when it is needed. And because React Email is written with React, dynamically replacing content also becomes easier.
These advantages enable the delivery of personalized content at low cost by dynamically replacing content based on user behavior and interests. Instead of sending the same content to all users simultaneously, delivering content tailored to each user's interests can be expected to achieve high revisit rates and improved engagement. By utilizing React Email, we gained a clear prospect of effectively resolving the challenges in our email delivery initiatives, enabling us to move forward with efficient user re-engagement initiatives.

HTML Generation Using React Email

From here, I will cover the implementation details. In the implementation, we use React Email to generate the HTML for emails. We adopted a process in which HTML is generated from JSX using the render function of React Email. First, we created the following component:

```tsx
import React from "react";

const AppCheckSection = () => {
  return (
    <div style={{ padding: "20px 0", borderBottom: "1px dashed #cccccc" }}>
      <div>
        <p>
          詳しいスポットの情報やアクセス情報はアプリで確認してみましょう。
          <br />
          他にも、アプリではあなたにだけのおすすめスポットを掲載中!
        </p>
        <a
          style={{
            padding: "10px 70px",
            background: "rgb(17,17,17)",
            borderRadius: "5px",
            textAlign: "center",
            textDecoration: "none",
            color: "#fff",
            display: "inline-block",
            marginBottom: "10px",
          }}
        >
          <span>アプリをチェック</span>
        </a>
        <br />
        <a href="https://deeplink.sample.hogehoge/">
          うまく開かない方はこちら
        </a>
      </div>
    </div>
  );
};

export default AppCheckSection;
```

In this way, we created components for constructing emails. Then, simply combining the created components in the parent component completes the email template.
```tsx
import React from 'react';
import AppCheckSection from '../shared/AppCheckSection';
import FooterSection from '../shared/FooterSection';
import RankingHeaderSection from './RankingHeader';
import RankingItems from './RankingItem';

export type RankingContents = {
  imageURL: string;
  name: string;
  catchPhrase: string;
};

export type WeeklyRankingProps = {
  areaName: string;
  contents: RankingContents[];
};

const WeeklyRanking: React.FC<WeeklyRankingProps> = ({ areaName, contents }) => {
  return (
    <div style={{ backgroundColor: '#f4f4f4', padding: '20px 0' }}>
      <div>
        <RankingHeaderSection />
        <RankingItems areaName={areaName} contents={contents} />
        <AppCheckSection />
        <FooterSection />
      </div>
    </div>
  );
};

export default WeeklyRanking;
```

To generate the email HTML, React Email's render function is used. Using fetchRegionalRankingData, it is possible to obtain different content information for each residential area and create emails accordingly.

```tsx
import { render } from '@react-email/render';
import WeeklyRanking from '../emails/weekly-ranking';
import { fetchRegionalRankingData } from './ranking-service';

export async function generateWeeklyRankingEmail(areaName: string): Promise<string> {
  // Fetch area-specific ranking contents, then render the JSX tree to HTML
  const contents = await fetchRegionalRankingData(areaName);
  const htmlContent = await render(<WeeklyRanking areaName={areaName} contents={contents} />);
  return htmlContent;
}
```

The HTML generated by the render function is used as the email body sent via the SaaS service's API. For batch processing, an ECS task is launched at times scheduled by EventBridge to run the email creation and sending processes. The content of the emails actually sent looks like the following: The images show content focused on the Kanto region, but the system can flexibly change the content according to the region set by the user. So if the user's residence is Osaka, the ranking for the Kansai region will be delivered to that user by email.
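The batch step (EventBridge triggering an ECS task that renders and sends one email per area) could be pictured roughly as below. This is a hedged sketch, not the production code: the payload shape, subject line, and `renderEmail` callback are assumptions for illustration, and the actual send call to the mail SaaS is left out.

```typescript
// Hypothetical sketch of the batch flow: render one email per area,
// producing the payloads that would be handed to the mail SaaS API.
type RenderEmail = (areaName: string) => Promise<string>;

type EmailPayload = { areaName: string; subject: string; html: string };

async function buildWeeklyPayloads(
  areaNames: string[],
  renderEmail: RenderEmail,
): Promise<EmailPayload[]> {
  const payloads: EmailPayload[] = [];
  for (const areaName of areaNames) {
    payloads.push({
      areaName,
      subject: `今週の${areaName}ランキング`, // hypothetical subject line
      html: await renderEmail(areaName),
    });
  }
  return payloads;
}
```

Keeping the payload construction separate from the send step makes the batch easy to test with a stubbed renderer.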
React Email has a preview function that lets us proceed with email implementation just as in ordinary React development. Implementation without the preview would have been very difficult, so this function was a great help. By leveraging it, we were able to proceed with implementation while checking layouts together with the marketing staff. Through componentization, we structured various elements, such as footers and app launch promotion sections, in addition to the ranking itself, as reusable parts. By combining existing components when creating new content as well, efficient and consistent email delivery becomes possible. Scheduled email delivery can end up repeatedly sending similar content, which can lead to a decline in user interest or, in the worst case, to the emails being marked as spam and rejected. Even in an automated system, delivering content that continuously attracts user interest is still required. With this in mind, we believe that a highly reusable design through componentization, which enables quick changes to the delivered content, is important.

Effect of Automated Email Delivery

As a result of starting automated email delivery using React Email and batch processing, the number of installations increased starting around the day we began delivery (February 22). We believe that dormant users who saw our emails became interested in the app again and were encouraged to reinstall it. In addition, the number of daily active users (DAU) around email delivery dates increased significantly and has shown a sustained upward trend since the start of the automated email delivery initiative. In this way, we succeeded in encouraging dormant users, including those who had uninstalled the app, to return.

Summary

Through automated email delivery using React Email, we succeeded in reviving dormant users and increasing DAU without manual intervention.
Many marketing teams in app development may be struggling with a large number of dormant users and limited marketing resources. Automating email creation with React Email reduces the burden of coming up with email content every week and enables efficient and effective marketing activities. Furthermore, we found React Email highly useful for continuously improving and quickly releasing content. We found that, even in today's world of diversified communication methods, email delivery can still function effectively as a marketing channel if we deliver content aligned with user interests. If you're struggling with stagnant revisit rates or looking for ways to revive dormant users, this approach is definitely worth considering.
I'm feeling a bit nervous writing this blog after ages. I'm Sugimoto from the Creative Office at KINTO Technologies (KTC for short). In 2024, our third year since the company was founded, we gave our corporate website a full redesign. The project began when the HR team requested a new recruitment-focused website to help attract more people to join us in the future. Since the corporate website is centered around recruitment, we interviewed not only the Human Resources team but also members of management to understand the company's direction, as well as engineers from the Developer Relations Group to capture voices from the front lines. Questions like "What kind of people does the company truly want?" and "Who do we genuinely want to work with?" guided our conversations. As we listened to various perspectives, along with their challenges and aspirations, the purpose of the corporate website gradually came into focus. "Let's create a website that shows what KTC is all about to engineers and creators who stay curious about technology, keep up with the latest trends, and take initiative." That goal shaped our concept. The concept is "The Power of Technology & Creativity." We picked this phrase to reflect our drive to lead Toyota's mobility services through technology and creativity. Setting a concept might feel like an extra step, but it gives everyone a shared point to return to, which is especially important when different roles are involved and the project starts to drift. "Do we really need that feature?" "Can't we make it more engaging?" With the concept in place, even questions from a different angle make it easier to say, "That's why we'll do it," or, "That's why we won't." Personality Settings The next step for us was to define a brand personality: a clear picture of what kind of person the company would be, and how it would behave, if it were human. (More on brand personality below .)
Creating a brand personality from the ground up takes time and effort, often requiring input from across the company. However, since the main goal of launching the corporate website was recruitment, speed was a priority. So we built on what was already in place within our company: our vision, values, culture, and work attitude. The personality we landed on for KTC is, simply, "creator." As creators, we define ourselves as those who use technology and creativity to build the best products for our users: products that are intuitive, clear, thoughtful, and useful. Creating an Exciting Mood Board With the brand personality set, the next step is figuring out how to reflect that in the design of the corporate website. So, one more step! Before the lead designer jumped in, the whole Creative Office came together to build a mood board. This gives us a visual anchor to return to; just like the concept itself, which helps keep things on track and makes the rest of the process smoother. Each designer brought in visuals they felt captured the KTC vibe, and the mood board session turned into a lively exchange. Creating a mood board also led to some new discoveries. I imagined the output would reflect the vibe of a shiny, fast-paced California tech company. But when we shifted our perspective to ask, 'Who are we, really?', the answer became clear: we are (or aspire to be) a professional engineering group that embraces the spirit of Japanese craftsmanship rooted in the Toyota Group’s gemba philosophy. The mood board was inspired by globally recognized modern systems and our defined brand personality. Our goal was to create a corporate website that offered high usability while visually expressing our brand identity. Achieving a Jump in Creativity and Efficiency By clearly defining the website’s "personality," "mood," and "purpose," everything came together with a strong sense of consistency—from the photo tone and interview content to the copywriting and implementation. 
It really highlighted how that clarity can enhance both creativity and efficiency. It also made it easier to explain the design logically to non-designers, helping us put even abstract ideas into words. Honored to Receive International Recognition Our newly redesigned corporate website has received several international web design awards, including the prestigious CSS Design Awards. We'd love for you to take a look. And if something clicks, we hope it sparks your interest in us! Check out the website here! https://www.kinto-technologies.com/ ※What is brand personality? It represents what kind of traits and personality a brand (or company) would have if it were a person. This is called its archetype. We use a common framework that breaks "personality" into 12 types. This helps us explore a company's character, thinking, behavior, and distinctive features. Having a clear brand personality makes it easier to present a consistent image. Even if the audience isn’t consumers, you can still leave a strong and unified impression on your target—whether it’s through a corporate website like this or event giveaways.
Hello! I'm Oka, the Osaka-based recruiter at KINTO Technologies. Our Osaka Tech Lab has just moved to a new office, and in this article I'll share the behind-the-scenes story and the highlights of the new space!

What is Osaka Tech Lab?

Osaka Tech Lab is our engineering hub for western Japan, opened in Shinsaibashi in 2022. With this move to a building directly connected to JR Osaka Station, access has become even better. Engineers from a wide range of fields, including software development, cloud infrastructure, and data analysis, gather here to develop and improve our in-house products.

"Osaka Tech Lab 2.0," Built by Everyone

How the concept was born

The office move kicked off the "Osaka Tech Lab 2.0" project! This project wasn't something someone had prepared in advance; members brought their own ideas of what they wanted the place to be and built it up together. Out of that came the concept "集GO!発SHIN!CO-LAB" (roughly, "gather, innovate, collaborate"). We wanted it to be "not just a workspace, but a place where we create new value together, drawing on Osaka's character and culture." With that feeling, we looked back on our activities so far and chose the name together.

![](/assets/blog/authors/oka/osakarenewal/1.png =600x)

The "kono yubi tomare" culture

Another catchphrase of our own was born at Osaka Tech Lab: "kono yubi tomare" ("land on this finger," the call Japanese children use to gather playmates). Someone with an idea raises their hand and says, "I want to try this," and people naturally gather around with "sounds good" and "let's do it together." Scenes like that happen all the time around us, and we came to call this way of working "kono yubi tomare."

![](/assets/blog/authors/oka/osakarenewal/2.png =600x)

Building the new office through a volunteer committee also started from this "kono yubi tomare" style: someone called out, and the members who gathered created the office together, hands-on. Here is a look at part of the new office, filled with those feelings!

Highlights of the New Office!

![](/assets/blog/authors/oka/osakarenewal/3.png =600x)

The office floor features road markings leading to the meeting rooms.

🛝 PARK area | Take off your shoes and relax

![](/assets/blog/authors/oka/osakarenewal/4.png =600x)

We created a shoes-off relaxation space where you can unwind. It's perfect for casual meetings or a short break, and it has already become a favorite spot where everyone naturally gathers, including at all-hands meetings.

![](/assets/blog/authors/oka/osakarenewal/5.png =600x)

🚗 Meeting room names, Osaka Tech Lab style

The meeting rooms are named after garages and pits. Some names, like "Motor Pool," uniquely combine Osaka flavor with mobility. (* "Motor pool" is a word commonly used in Osaka meaning "parking lot.")

![](/assets/blog/authors/oka/osakarenewal/6.png =600x)

The names emerged naturally from casual chat while brainstorming on Slack. They became names that feel like "us," decided while everyone had fun.

![](/assets/blog/authors/oka/osakarenewal/7.png =600x)

(Incidentally, they were decided through heated discussion, seasoned with Osaka-style humor!)

![](/assets/blog/authors/oka/osakarenewal/8.png =600x)

🛣️ OSAKA JCT

Like KINTO's Muromachi office, the new office also has a communication space called "OSAKA JCT." The wall design is a creative work we're proud of, in which an Osaka Tech Lab designer gave shape to the concept we all came up with together.

![](/assets/blog/authors/oka/osakarenewal/9.png =600x)

For the office opening ceremony, we recruited members "kono yubi tomare" style and planned and ran it as a volunteer committee, making use of this JCT space. The move celebration was also member-led: we invited the managers and held an internal kickoff party. Everything was a handmade event "built by everyone."

![](/assets/blog/authors/oka/osakarenewal/10.png =600x)

Here is some of the feedback we've received from members about the new office:

- My motivation for work naturally rises; it makes me sit up straight.
- The shared "PARK" space feels open and is a comfortable place where even large groups naturally gather.
- Mobility-themed touches are everywhere: place names and signs, floors styled as roads, tire-shaped desks, car-shaped mobile benches. The playfulness in every detail makes just walking around exciting.

When we interviewed members at the opening ceremony, we heard many comments like "I'm happy our voices are reflected in the office" and "I feel attached to it as 'our place.'"

What I felt at Osaka Tech Lab

In fact, this "built by everyone" atmosphere has been unchanged since the old office.

![](/assets/blog/authors/oka/osakarenewal/11.jpeg =600x)

The old office's closing ceremony was a warm, laid-back party where everyone brought drinks and toasted together. Regardless of department or title, people wander in, and before you know it a party has started; that culture has taken root naturally at Osaka Tech Lab.

As a recruiter, I feel that this closeness, and this culture of valuing our own voices, are Osaka Tech Lab's greatest charms. Even in the new office, I'm sure this atmosphere won't change. We want to remain a place where people can casually talk about the future. Won't you join us in "集GO!発SHIN!CO-LAB"?

Event Information

Osaka Tech Lab regularly holds events where you can experience our culture. If "kono yubi tomare" resonates with you, please feel free to drop by. As a concrete initiative under the concept, we are holding "CO-LAB Tech Night," an event where Osaka Tech Lab members share the knowledge and know-how gained from their day-to-day development.

CO-LAB Tech Night vol.1, 「全部内製化 大阪でクラウド開発やってるで!」 #1 ("All in-house! We're doing cloud development in Osaka!")
- Date and time: Thursday, July 10, 2025, 19:00-21:30
- Overview: On the theme of cloud development, covering cloud infrastructure, SRE, and data analysis platforms, Osaka Tech Lab members share their current initiatives and the insights gained from them.
- Details: https://www.kinto-technologies.com/news/20250702

CO-LAB Tech Night vol.2, Cloud Security Night #3
- Date and time: Thursday, August 7, 2025, 19:00-21:30
- Overview: An event to deepen knowledge of cloud security, focusing on topics in multi-cloud environments such as AWS, Google Cloud, and Azure, through each company's initiatives. This time, the third "Cloud Security Night," held so far in Tokyo, comes to Osaka!
- Details: https://www.kinto-technologies.com/news/20250709

![](/assets/blog/authors/oka/osakarenewal/12.png =600x)

Find the latest information on the Osaka Tech Lab special site!

Osaka Tech Lab will keep sharing on a variety of themes, including engineering, cloud, and data analysis, through events and the Tech Blog. Event information will be updated on the Osaka Tech Lab special site, so if you're interested, please check the event list (CO-LAB events)!

▼ The Osaka Tech Lab special site is here:
https://www.kinto-technologies.com/company/osakatechlab/

![](/assets/blog/authors/oka/osakarenewal/13.png =600x)

(The Osaka Tech Lab special site was also born from "kono yubi tomare.")

The Osaka Tech Lab special site, published at the same time as this article, is another initiative born from "kono yubi tomare." People naturally gathered around members' voices like "we want to share more" and "we want to convey the real atmosphere of Osaka," and built everything hands-on, from planning and design to writing and publication. We even brought in members of the Creative Office in Tokyo: a truly Osaka-style challenge realized through CO-LAB. This special site is packed with our culture, so please take a look.

Casual interviews available

If you'd like to hear more, or want to get a better feel for the Osaka Tech Lab atmosphere, please feel free to apply via the URL below!
https://hrmos.co/pages/kinto-technologies/jobs/1859151978603163665
Hello. My name is Hoshino, a member of the DBRE team at KINTO Technologies. In my previous job, I worked as an infrastructure and backend engineer at a web production company. Over time, I developed a strong interest in databases and found the work of DBRE especially compelling, so I decided to join the DBRE team at KINTO Technologies in August 2023. The Database Reliability Engineering (DBRE) team operates as a cross-functional organization, tackling database-related challenges and building platforms that balance organizational agility with effective governance. Database Reliability Engineering (DBRE) is a relatively new concept, and only a few companies have established dedicated DBRE organizations. Among those that do, their approaches and philosophies often differ, making DBRE a dynamic and continually evolving field. For examples of our DBRE initiatives, check out the tech blog by Awache ( @_awache ) titled Efforts to Implement the DBRE Guardrail Concept , as well as the presentation at this year's AWS Summit and p2sk's ( @ p2sk ) talk at the DBRE Summit 2023 . In this article, I'd like to share a report on the DBRE Summit 2023, which was held on August 24, 2023! What is DBRE Summit 2023? This event is for learning about the latest DBRE topics and practices, as well as networking in the DBRE community. A total of 186 people signed up in advance via connpass , both online and offline, and many of them also participated on the day. Thank you to all the speakers and attendees for taking the time out of your busy schedules to help make the DBRE Summit a success! Linkage's Initiatives to Make DBRE a Culture, Not Just a Role Taketomo Sone/Sodai @soudai1025 , Representative member of Have Fun Tech LLC, CTO of Linkage, Inc., and Co-organizer of the DBRE Users Group (DBREJP) @ speakerdeck DBRE is not just a role, but a database-centered operational philosophy and a culture of maintaining databases as part of everyday product development activities.
When a hero who can handle all databases emerges, it creates the risk of becoming overly dependent on that person. To prevent this, we should strive for a peaceful environment where stable operations don't rely on heroes. To achieve that, we need to build a strong organizational culture at the company level. While individual skill and enthusiasm are necessary, they alone can't build a culture, so the first step is to create the environment. In addition, because design is directly linked to the security and operation of the database, there needs to be a culture in which developers practice DBRE. Database Reliability Engineering is a philosophy, and an operational style that aims to solve problems through systems rather than craftsmanship. DBRE focuses not on reacting to issues, but on preventing them in the first place. It's never too late to start! I realized that when putting DBRE into practice, it is very important to involve others rather than trying to do it all by ourselves. DBRE = Philosophy and Culture! To help build a company culture, I want to proactively engage in cross-functional communication! Current State of Mercari's DBRE and a Comparison of Query Replay Tools Satoshi Mitani @mita2 , DBRE, Mercari, Inc. Mercari's DBRE team was established about a year ago. Until then, the SRE team was in charge of the database. Initially, the system architecture consisted of just a monolithic API and a single database, but it has since been split into a monolith and microservices. The main responsibilities of the DBRE team include providing support for the databases owned by each microservice, answering various DB enquiries to resolve developers' concerns, and researching tools to increase productivity. When they started providing support for microservice DBs, they faced challenges: they wanted to act proactively but could not easily see the issues, and the DBRE team was not widely recognized.
To address these, two measures were taken:

- A Developer Survey was conducted, with multiple-choice questions about what developers expect from DBRE.
- A DBRE Newsletter is published every six months, with active communication from the DBRE team.

These efforts have gradually raised awareness across the company, leading to an increase in requests. Other DBRE responsibilities include operational tasks related to the monolith DB and efforts toward modernization. To select a query replay tool capable of mirroring production queries, they defined key evaluation criteria and then conducted a survey.

What is a replay tool? A replay tool reproduces production queries or traffic in a separate environment. It is used to investigate the impact of database migrations or version upgrades.

Tools compared:

- Percona Query Playback: a log-based, easy-to-use replay tool.
- MySQL-query-replayer (MQR): a tool built for large-scale replays; you can really sense the passion of its creator, Tombo-san.

I got the impression that the DBRE team is actively sharing organizational challenges through Developer Surveys and DBRE Newsletters. It was also very insightful to hear about the criteria and process used in evaluating replay tools.

Introducing DBRE Activities at KINTO Technologies

Masaki Hirose @ p2sk , DBRE, KINTO Technologies @ speakerdeck

The DBRE team is part of a company-wide cross-functional organization called the Platform Group. The roles of DBRE are divided into two categories:

- Database Business Office: responsible for solving problems based on requests from development teams and stakeholders, as well as promoting the use of DBRE-provided platforms.
- Cloud Platform Engineering: responsible for providing database-related standards and platforms to promote effective cloud utilization while ensuring governance compliance.

DBRE's activities are determined by defining four pillars and then deciding on specific activities based on the current state of the organization.
Actual Activities Building a system to collect information on DB clusters DB secret rotation Validation: Aurora zero-ETL integration with Redshift (preview) KINTO Technologies' DBRE team is building platforms to enhance the reliability of databases. To achieve this, we've chosen to solve the challenges through engineering: using the cloud effectively to balance agility with database security and governance, and, by evolving these efforts into a company-wide platform, continuing to drive a positive impact on the business. We're proceeding with these efforts through an approach called Database Reliability Engineering. I was very impressed by how the team clearly defines the role of DBRE and leverages that definition to design organizational systems that both improve database reliability and contribute to the business. In the future, I hope to contribute to building even better systems based on the four DBRE pillars. Implementing DBRE with OracleDB: We tried it at Oisix ra daichi ~ Tomoko Hara @tomomo1015, DBRE, Oisix ra daichi Inc. and Co-organizer of the DBRE Users Group (DBREJP) @speakerdeck Among the many aspects of visibility that SRE/DBRE can provide, cost visibility tends to be overlooked. So, we're taking on the challenge of managing infrastructure costs across the entire company. Our approach involves reviewing the list of invoices to understand the actual state of the system and identify potential issues. Additionally, by evaluating cost-effectiveness, we contribute to improving business profit margins. Database costs make up a significant portion of overall infrastructure expenses. While databases are critical enough to warrant that investment, they must not be neglected or treated with complacency. To reduce database costs, we're implementing measures such as stopping databases used in development environments on days when they are not in use, and considering the most cost-effective approach.
When using a commercial database, knowing the license type and its associated cost is very important for putting DBRE into practice. Conduct a license inventory to understand whether the licenses your company has contracted are appropriate. Take the time to think about how we can improve reliability, grow, and enjoy what we do, both now and in the future. By visualizing costs, many things become clear, so we encourage you to start by making costs visible as an approach to contributing to the business and improving reliability. It was very interesting to hear about cost visualization, which is something I don't often get to hear about. As mentioned in the talk, the database accounts for a large proportion of infrastructure costs and is a critical part of the system, so I felt it was very important to visualize it and evaluate its cost-effectiveness. Including cost aspects, I found it helpful and hope to contribute to solving such challenges as part of DBRE going forward. ANDPAD's Initiatives to Automate Table Definition Change Review and Create Guidelines Yuki Fukuma @fkm_y, DBRE, ANDPAD Inc. @speakerdeck At ANDPAD, when a product team makes changes to table definitions, the DBRE team is responsible for reviewing them, and several issues have arisen in the process. For this reason, we felt the need to create a scalable mechanism to improve review efficiency. As part of our investigation, we decided to categorize the review comments from DBRE to the developers, and to release small, incremental changes starting with those we could address. We adopted this approach in order to get early results while moving forward. Automating Access Paths Although the database terms of use had already been created, we hypothesized that they weren't being read much until they were actually needed. So, we created an access path that would display them at the necessary timing. As a result, the number of views increased and the frequency of comments during reviews decreased.
Automating Table Definition Reviews A system was built to automatically review items that can be mechanically checked. This reduced the review costs for DBRE. By creating such a system, we not only improved review efficiency, but also made it possible to apply the process to products that had not previously been reviewed, enabling DBRE to automate table definition reviews. I found it impressive how the automation of access paths and table definition reviews made the process highly efficient and easy to use at the right time. This was very helpful, and I hope to build something similar myself in the future. Michi Kubo @amamanamam, DBRE, ANDPAD Inc. @speakerdeck A story about creating a course of action to ensure that table definition changes are implemented uniformly and with higher quality in production by all teams. One of the issues was that the quality of validation during table definition changes varied between teams, leading to migrations being carried out without sufficient validation, potentially causing service disruptions or failures. To address this, we conducted interviews and analyzed the causes. We then created clear guidelines to ensure the quality of validations. Overview of the guidelines Create a list of tasks to be completed before the actual execution Create a list of items to be included in pull requests Create a flow for considering release timing As a result of implementing these guidelines, validation results became more comprehensive and unified. I found it impressive how the team clearly identified the issues and organized guidelines and processes to improve quality, which helped raise awareness across the team and enhance reliability. As a DBRE team member, I'd like to organize guidelines in a way that motivates the whole team to empathize with the issues and collaborate in solving them.
Panel Discussion: "The Future of DBRE" Taketomo Sone/Sodai @soudai1025 Satoshi Mitani @mita2 Tomoko Hara @tomomo1015 What's the best way to get started with DBRE? It might be a good idea to start by setting a goal and then determining what to do based on that. Identifying challenges and working to build a culture around addressing them is important. Database standardization might be a good topic to tackle first. What unique skills are required to practice DBRE? Since DBRE activities span different teams, communication skills are essential. You need a personality that can respond positively under pressure. The ability to build trust is important. What makes DBRE an attractive career? It will enhance your DB expertise. Since the core technologies of databases don't change rapidly, the knowledge and experience you gain can be used for a long time. It'll broaden your perspective beyond databases to include applications as well. What are you looking to work on in the future? I'd like to engage in community activities as a DBRE. I'd like to accumulate more success stories as a DBRE. I hope DBRE will become a more widely recognized role. I was a bit surprised to learn that DBRE requires more than just database knowledge. Of course, database knowledge is essential, but I realized that communication skills and a positive mindset are just as important for building a cross-organizational culture. I personally hope that DBRE becomes a role more and more people aspire to. Summary So, how was it for you? DBRE itself is still a developing field, and only a limited number of companies have adopted it so far. That's why the DBRE Summit was such a valuable opportunity to learn about the DBRE initiatives of various companies. Having recently transitioned from backend engineering to DBRE, I'm not yet a database specialist.
However, through this summit, I came to recognize that working on database improvement tasks and building cross-functional cultural foundations are also important activities of DBRE. https://youtube.com/live/C2b93fgn05c
Hello! Hi there—this is MakiDON, joining the company in December 2024! In this article, I asked our December 2024 new joiners to share their first impressions right after joining. I've put their thoughts together here. I hope this content will be useful for those who are interested in KINTO Technologies, and serve as a reflection for the members who participated in the interview! Fsk Self-introduction I work on frontend development in the Business System Group, part of the Business Systems Development Department. So far, I've been doing frontend using Next.js, always aiming to build user-friendly interfaces. There's still plenty for me to learn, but I'll do my best to be helpful in any way I can. How is your team structured? There are five of us, including me. We've got one PM, two front-end engineers, and two back-end engineers. What was your first impression of KINTO Technologies when you joined? Were there any surprises? Using generative AI tools like Copilot and ChatGPT has been a huge help. I was a bit nervous before joining, but everyone was so warm and welcoming that I quickly felt at ease. What is the atmosphere like on site? I really appreciate how easy it is to ask for help when I run into something. How did you feel about writing a blog post? I think it's great to have the opportunity to share my thoughts and feelings with everyone. Question from Frank to Fsk If you could hand off just one boring daily task to a robot, what would it be? Definitely, cleaning! It eats up time every day, and I'd much rather spend that time doing something else. Takahashi Self-introduction I work as a project manager for the Owned Media Group and the Marketing Product Development Group. I focus on helping everyone move toward a shared goal—acting as a good partner to our clients and internal teams, and as a bridge between engineers and business divisions. In my previous job, I gained experience as a web designer.
Later, I transferred to the Development Department, where I managed a range of platform-related areas, including membership systems, payments, points, and facility information. How is your team structured? The Owned Media Group has one project manager and two engineers. The Marketing Product Development Group focuses on static content and includes a team leader, a project manager, a tech lead, and two engineers. What was your first impression of KINTO Technologies when you joined? Were there any surprises? My first impression was how quiet the office was. At my previous job, the sales team was on the same floor and right nearby, so it was always noisy. As for any gaps or surprises, I'd say that each group has its own development style. You kind of need to stay flexible and ready to adapt your mindset. What is the atmosphere like on site? It's quiet. So quiet that I feel like I need to be a little mindful when tossing a can into the trash. How did you feel about writing a blog post? During my self-intro at work, I think the only thing that really came across was that I'm into Monster Hunter. So I'm glad to get the chance to write this article. Question from Fsk to Takahashi Do you prefer World or Wilds? lol I'd say Wilds, especially with all the upcoming updates to look forward to! Hoping it becomes something we can enjoy for over 10 years, just like World! Generative AI is currently being used in the design field, and many engineers are being called "AI prompt engineers." What do you think about this trend? As long as people are careful not to infringe on copyright or image rights, I think it's totally fine to let generative AI handle certain tasks. That said, I don't think it's suitable in contexts like contests or competitions where creativity is what's being judged. Lyu Self-introduction I currently belong to the Business System Group in the Business System Development Division, where I mainly work on backend system development.
My day-to-day work involves designing, implementing, operating and maintaining various systems that help streamline internal operations and improve data integration. I always keep stability and scalability in mind when developing systems. Previously, I worked at IBM, where I was involved in developing medical information systems for major hospitals in Japan. I've had hands-on experience across the entire process—everything from requirements gathering and design to development, rollout, and after-sales support. I've always aimed to build systems that truly meet the needs of users on the ground. Drawing on that experience, I continue to work on deepening both my technical skills and understanding of the business so I can deliver systems that are even more practical and valuable. How is your team structured? There are five of us, including me. We've got one PM, two front-end engineers, and two back-end engineers. Everyone was a pro in their own area, and I learned a lot from being part of the team. What was your first impression of KINTO Technologies when you joined? Were there any surprises? The first thing that stood out was how warm and welcoming everyone was. There's a relaxed atmosphere where people communicate freely, without being overly concerned about hierarchy. I was also impressed by the wide range of in-house events and active club activities—there's always something going on. The benefits are really employee-friendly too, which makes it a great place to work. There wasn't a big gap between what I expected and what I actually experienced. If anything, the work environment turned out to be even better than I had imagined. What is the atmosphere like on site? It's bright and really enjoyable. Of course, we talk about work, but it's also easy to share fun ideas or little things that happen during the day. The team members are all close to each other and it's easy to get along with anyone, so you can work with peace of mind. 
How did you feel about writing a blog post? I'm really glad to have the chance to share my experiences like this. I hope that something from my daily work or thoughts can help someone out there, even just a little. Question from Takahashi to Lyu If you were to buy a car through KINTO, which car would you like to drive? I'd definitely go for the Crown. I've always thought it looked cool. Plus, I actually use this model a lot when creating test data at work, so I've kind of grown attached to it. lol The employee discount program also makes it possible to get a Crown at a really reasonable price, which is a big plus. On top of that, the range of customer-friendly services, like the comprehensive insurance plan, really makes the whole package feel impressive. MakiDon Self-introduction My name is MakiDon, and I joined the company in December. I belong to the Marketing Product Development Group in the Mobility Product Development Division. I mainly handle data analysis and machine learning tasks. My main role is to identify issues through data analysis, propose strategic solutions and exit plans, and support system design using machine learning. Before this, I worked as a project manager at a startup focused on architecture and IT. How is your team structured? I'm in the Data Analysis and ML Utilization Team. We're a group of eight: one PjM (Project Manager)/PdM (Product Manager), one Scrum Master, and six engineers. What was your first impression of KINTO Technologies when you joined? Were there any surprises? Since KTC is part of a large corporation, my first impression was that it'd be a pretty traditional and stable company. But once I joined, I saw generative AI being used in Slack and AI actively integrated into various systems. It quickly became clear that the company is a tech company and has a fast-moving, startup-like energy—much more than I expected. What is the atmosphere like on site? It's a very open and supportive environment.
Not only within the team but across departments, people are quick to offer help. You can ask for advice anytime, which makes it easy to work with peace of mind. How did you feel about writing a blog post? Actually, I got to write a tech blog post before this one. I'd never written a blog before, but thanks to the support and advice from my team, I was able to write it without any worries. It turned out to be a really valuable experience. I'll continue to do my best to share my new knowledge and experience both inside and outside the company! Question from Lyu to MakiDon What are you most proud of in your work so far? By bringing in-house the output we'd previously generated using machine learning tools, we managed to cut costs and boost click-through rates! Frank Neezen Self-introduction I'm Frank Neezen, a member of the Business Development Department, officially titled Business Development Manager. My primary role, however, is as the Technical Architect, where I help guide the design and implementation of our core global full-service products. My background lies in consulting, where I've focused on advising clients on leveraging Salesforce to meet their technical and operational needs. How is your team structured? My direct team consists of four members with a diverse skillset. We collaborate closely with our engineering team to develop software solutions for the global full-service lease business. What was your first impression of KINTO Technologies when you joined? Were there any surprises? My transition from Salesforce in Amsterdam to KTC in Tokyo was remarkably smooth! I had some initial concerns about adapting to the cultural differences, but the exceptional onboarding process and the warm, supportive team at the Jimbocho office made all the difference. From day one, their welcoming attitude helped me settle in effortlessly.
My main hurdle, however, was organizing all my personal affairs, for example sorting out banking or registering within the neighborhood, without being able to speak Japanese. I had lots of help from KTC with these activities, though. What is the atmosphere like on site? Our team is based together in the Jimbocho office, next to many of the engineers. The vibe is open and professional, but also relaxed. There's a good team feeling; we all want to succeed in our work. How did you feel about writing a blog post? I have written articles in the past on other topics, though mainly related to Salesforce. Always happy to write up and share my personal story of joining KTC! Question from MakiDon to Frank Was there anything that surprised you when you came to Japan? I'm amazed by how safe Japan is; walking around anywhere in Tokyo, the biggest city in the world, feels completely secure! Also, what's truly surprising is that if you lose something, like a wallet or phone, it almost always finds its way back to you. There have been a few times when I didn't even realize I had lost something, but then someone would come up to me with my lost item. Such a refreshing experience! Finally Thank you everyone for sharing your thoughts on our company after joining it! There are more and more new members at KINTO Technologies every day! We'll be posting more new-joiner stories from across divisions, so stay tuned! And yes — we're still hiring! KINTO Technologies is looking for new teammates to join us across a variety of divisions and roles. For more details, check it out here!
Hello from Osaka! ( º∀º )/ This is Yukachi from the event team in the Tech PR Group. In July 2025, the Osaka Tech Lab finally got the event space we'd been waiting for! In this post, I'll give you a quick guide on how to get to Osaka Tech Lab JCT! ![accessosaka1](/assets/blog/authors/uka/accessosaka/jct.png =600x) Osaka Tech Lab JCT seats about 40 people. Address: North Gate Building 20F, 3-1-3 Umeda, Kita-ku, Osaka-shi, Osaka 530-0001. From JR Osaka Station: 2 minutes from the Central Gate (1F) or the Renrakubashi Gate (3F). From Osaka Metro Umeda Station: 5 minutes from the North Gate. From Hankyu Umeda Station: 7 minutes from the 2F Central Gate. From Hanshin Umeda Station (via the connecting bridge): 7 minutes. The venue is right next to LUCUA 1100. Follow the signs for "LUCUA 1100" and "Office Tower"! If you're coming by Hankyu or Hanshin, first head toward JR Osaka Station! ![accessosaka1](/assets/blog/authors/uka/accessosaka/0.png =600x) Use this as a landmark! ![accessosaka1](/assets/blog/authors/uka/accessosaka/2.png =600x) At JR Osaka Station, exit through the Central Gate or the Renrakubashi Gate and head toward the Office Tower. ![accessosaka1](/assets/blog/authors/uka/accessosaka/1.png =600x) From Osaka Metro, exit through the North Gate and head toward the Office Tower. ![accessosaka1](/assets/blog/authors/uka/accessosaka/3.png =600x) Coming from the 1st floor? This way. ![accessosaka1](/assets/blog/authors/uka/accessosaka/4.png =600x) Coming from the 3rd floor? This way. ![accessosaka1](/assets/blog/authors/uka/accessosaka/5.png =600x) Take the escalator up; the connecting walkway is on the 4th floor (it's even quicker if you board from the 3rd floor). ![accessosaka1](/assets/blog/authors/uka/accessosaka/6.png =600x) Go straight ahead to the automatic doors. ![accessosaka1](/assets/blog/authors/uka/accessosaka/front.jpg =600x) Enter through the main entrance and turn right.
:::message
For building security reasons, the main entrance may be locked at certain times of day. If so, please contact us via the information board with the QR code at the main entrance, and a staff member will come to meet you!
:::
![accessosaka1](/assets/blog/authors/uka/accessosaka/7.png =600x) Take the nearest elevator up to the 20th floor. ![accessosaka1](/assets/blog/authors/uka/accessosaka/8.png =600x) KINTO Technologies is right there as you step off the elevator! Welcome! Finally We hope you enjoy your time at the Osaka Tech Lab! Thank you for visiting, and we look forward to seeing you again! (^_^)/
My name is Nakagawa, and I am the team leader of the data engineering team in the analysis group at KINTO Technologies. Recently, I have become interested in golf and have started to pay attention to the cost per ball. My goal this year is to make my course debut! In this article, I would like to introduce the efforts of our data engineering team in efficiently developing KINTO's analytics platform and providing the data necessary for analysis in line with service launches. Data Engineering Team's Goal The data engineering team develops and operates an analytics platform. An analytics platform plays a behind-the-scenes role that involves collecting and storing data from internal and external systems, and providing it in a form that can be utilized for business. So that data can be utilized immediately upon the launch of services, our goal is: **"In line with the launch of various services, we will aggregate data on our analytics platform and provide it immediately!"** Challenges However, with the expansion of the KINTO business, the following challenges have arisen against the roles and goals mentioned above: limited development resources (as we are a small, elite team); an increase in systems to be linked due to business expansion; and modifications that increase in proportion to the number of linked systems. (Note: The increase in modifications is also influenced by our agile business style of "starting small and growing big.") Solutions To solve the above challenges, we use AWS Glue for ETL. From the perspective of reducing workloads, we have focused on two aspects, operations and development, and approached the challenges with the following methods: standardization aimed at no-code, and automatic column expansion for a faster, more flexible analytics platform. Our Company's AWS Analytics Platform Environment Before explaining the two improvements, I would like to describe our analytics platform environment.
Our analytics platform uses AWS Glue for ETL and Amazon Athena for the database. In the simplest pattern, its structure is as shown in the diagram below. The structure involves loading data from source tables, accumulating raw data in a data lake in chronological order, and storing it in a data warehouse for utilization. When developing workflows and jobs for data linkage using AWS Glue, KINTO Technologies uses CloudFormation to deploy a series of resources, including workflows, triggers, jobs, data catalogs, Python, PySpark, and SQL. The main resources required for deployment are a YAML file (workflow, job, trigger, and other configuration information), a Python shell script (for job execution), and a SQL file (for job execution). As mentioned above, development workloads increased in proportion to the number of services, tables, and columns, which began to strain our development resources. As described in the solutions above, we addressed the challenges with two main improvements; I would like to introduce the methods we used. Standardization Aimed at No-Code "Standardization aimed at no-code" was carried out in two steps: Step 1 (2022), standardization of Python programs, and Step 2 (2023), automatic generation of YAML and SQL files. For the Python shell improvement in Step 1, we focused on the fact that, until then, workflow development had been performed on a per-service basis, and the Python shell scripts had also been developed, tested, and reviewed per workflow, which led to an increase in workloads. We moved forward with program standardization by unifying the parts of the code that had been reused with slight modifications across different workflows, and by making them general-purpose enough to accommodate variations in data sources. As a result, while development and review effort is now concentrated on the common code, there is no longer any need to develop source code for each workflow.
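The idea behind this standardization can be sketched as a single common job driven entirely by per-workflow settings. This is a hedged illustration only: the function name, configuration keys, and pipeline steps below are assumptions for the sake of the example, not our actual code.

```python
# Sketch of a standardized, config-driven linkage job: one shared script
# serves every workflow, and per-service differences live only in config.
# All names (run_linkage, workflow_id, source_type, ...) are illustrative.

COMMON_STEPS = ["extract", "to_lake", "to_dwh"]

def run_linkage(config: dict) -> list:
    """Run the common pipeline for one workflow config; return an audit log."""
    log = []
    for table in config["tables"]:
        for step in COMMON_STEPS:
            # In a real job, each step would call shared extract/load logic
            # parameterized by the data source type (e.g. RDS or BigQuery).
            log.append("{}:{}:{}".format(config["workflow_id"], step, table))
    return log

# Adding a new service's linkage then only requires new configuration:
config = {
    "workflow_id": "wf_example_service",
    "source_type": "rds",  # assumed key selecting source-specific handling
    "tables": ["orders", "customers"],
}
audit = run_linkage(config)
print(audit[0])  # → wf_example_service:extract:orders
```

The point of the design is that testing and review concentrate on the shared function, while each new workflow contributes only data, not code.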
If the data source is Amazon RDS or BigQuery, all processing, including data type conversion for Amazon Athena, can now be handled within the standardized part. Therefore, when starting data linkage for a service, it is now possible to achieve no-code data linkage by simply writing settings in a configuration file. Step 2, the automatic generation of YAML and SQL files, improves on the configuration files that remained necessary after Step 1, as well as on the View definitions required for linkage with the source side. We improved these by using GAS (Google Apps Script) to automatically generate the configuration files, namely the YAML and the SQL for the Views. Development work is minimized to entering just the necessary definitions, such as the workflow ID and the table names to be linked, in a Google Spreadsheet, which then automatically generates the YAML configuration files and the SQL files for the Views. Automatic Column Expansion for a Faster, More Flexible Analytics Platform Before this improvement, the table and item definitions that had already been defined at the data linkage source were also defined in YAML on the analytics platform side.[^1] Therefore, at initial setup, it was necessary to define as many items on the analytics platform side as on the data linkage source side, resulting in roughly 800 to 1,200 item definitions per service on average (20 to 30 tables × 20 items × both lake and DWH). Our company is constantly expanding its services based on the philosophy of "starting small and growing big," which frequently results in backend database updates. Each update also required carefully identifying and modifying the relevant portions from among those 800 to 1,200 definition items, which significantly increased development workloads.
So what we came up with was a method in which, when accessing the data linkage source, the item definition information is linked at the same time, allowing automatic updates of the item definitions on the analytics platform. The idea is that since properly maintained definitions already exist on the source side, there is no reason not to take advantage of them! The specific implementation of automatic column expansion follows these steps:

1. Retrieve the table information from the AWS Glue Data Catalog with `glue_client.get_table`.
2. Replace `table['Table']['StorageDescriptor']['Columns']` with the item list `col_list` obtained from the data linkage source.
3. Update the AWS Glue Data Catalog with `glue_client.update_table`.

```python
import boto3

def update_schema_in_data_catalog(glue_client: boto3.client, database_name: str,
                                  table_name: str, col_list: list) -> None:
    """
    Args:
        glue_client (boto3.client): Glue client
        database_name (str): Database name
        table_name (str): Table name
        col_list (list): Column list of dictionaries
    """
    # Retrieve the table information from the AWS Glue Data Catalog
    table = glue_client.get_table(
        DatabaseName=database_name,
        Name=table_name
    )

    # Replace Columns with col_list
    data = table['Table']
    data['StorageDescriptor']['Columns'] = col_list
    table_input = {
        'Name': table_name,
        'Description': data.get('Description', ''),
        'Retention': data.get('Retention', None),
        'StorageDescriptor': data.get('StorageDescriptor', None),
        'PartitionKeys': data.get('PartitionKeys', []),
        'TableType': data.get('TableType', ''),
        'Parameters': data.get('Parameters', None)
    }

    # Update the AWS Glue Data Catalog
    glue_client.update_table(
        DatabaseName=database_name,
        TableInput=table_input
    )
```

In addition, when building the item list obtained from the linkage source, we also map the differing data types of each database in the background. By doing so, we can generate item definitions on the analytics platform based on the schema information from the source side.
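That background type mapping could be sketched as follows. This is a hedged illustration: the mapping table, function name, and input format are assumptions for the example, not the actual implementation, and cover only a handful of MySQL types.

```python
# Illustrative sketch of mapping source-side column types (e.g. MySQL on
# Amazon RDS) into Athena/Glue-compatible types before updating the Data
# Catalog. The mapping below is a small, assumed subset.

MYSQL_TO_ATHENA = {
    "int": "int",
    "bigint": "bigint",
    "varchar": "string",
    "text": "string",
    "datetime": "timestamp",
}

def to_catalog_columns(source_columns: list) -> list:
    """Build the Columns list to pass to glue_client.update_table."""
    cols = []
    for col in source_columns:
        # Drop length/precision suffixes, e.g. "varchar(255)" -> "varchar"
        base_type = col["type"].split("(")[0].lower()
        cols.append({
            "Name": col["name"],
            # Fall back to string for unmapped types
            "Type": MYSQL_TO_ATHENA.get(base_type, "string"),
        })
    return cols

col_list = to_catalog_columns([
    {"name": "id", "type": "BIGINT"},
    {"name": "title", "type": "varchar(255)"},
    {"name": "updated_at", "type": "datetime"},
])
print(col_list)
```

Falling back to `string` for unmapped types is one possible design choice for a sketch like this, since Athena can read most values as strings.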
One point we paid attention to with the automatic updating of item definitions is that, otherwise, the table structure of the analytics platform under our management could change without our knowledge. To address this concern, we implemented a notification to Slack whenever a change occurs, which prevents the table structure from changing unnoticed. The system detects changes, and after confirming them with the source system, we can propagate them to downstream systems as needed. [^1]: I won't go into details here, but AWS Glue includes a crawler that updates the data catalog. However, due to issues such as the inability to update with sample data or to perform error analysis, we decided not to use it. Conclusion What are your thoughts? This time, I introduced two ways we use AWS Glue in our analytics platform: "standardization aimed at no-code" and "automatic column expansion for a faster, more flexible analytics platform." With these two improvements, we have succeeded in reducing development workloads. Now, even a data linkage job involving 40 tables can be developed in about one person-day, which has enabled us to achieve our goal of "aggregating data into the analytics platform and providing it immediately in line with the launch of various services!" I hope this serves as a useful reference for those who wish to reduce development workloads in a similar way!
Introduction Hello! My name is Miura and I work in the Development Support Department at KINTO Technologies, assisting the Global Development Department. My day-to-day work includes managing tools for the Global Development Division, supporting office operations to create a smoother working environment for team members, and handling various inquiries. Lately, I've been really into following my favorite band. They're only active for one year, so I've been chasing their shows wherever I can! Now, back to the topic. Since most of my work involves a lot of detailed admin tasks, I try to find ways to make small improvements every day. In this article, I'll introduce some of the kaizen initiatives I've implemented at KINTO Technologies. Kaizen So Far At KINTO Technologies, being part of the Toyota Group, we often use the term kaizen rather than improvement. Here's how we define it:🔻 Kaizen refers to the practice of eliminating waste in tasks or workflows and continuously improving the way we work to focus on higher-value activities. ^1 Since joining the company, I've carried out the following kaizen activities: [1] Revising and updating mailing list management [2] Revising the logbook and approval route for lending security cards [3] Managing test devices [4] Creating name tags for shared umbrellas Let's take a closer look at the background, actions taken, and effects. [1] Revising and Updating Mailing List Management📧 This initiative began in my very first month at the company, when I tried to call members for a meeting but had no idea who was on the mailing list. Although the Development Support Division, where I belong, had an internal mailing list, the Global Development Division didn't have anything like that! So I thought, why not create a similar one? But first, I had to identify which mailing lists even existed. Once I pulled the data, I was shocked: there were 94 mailing lists currently in use! Are we really using all of these?
This question led me to carry out a full audit. First, I followed the example set by the Development Support Division and created a similar list in Excel. I set up a matrix with registered members on the vertical (Y) axis, mailing lists on the horizontal (X) axis, and used a ● for registrants. Mailing List (Excuse the heavy redactions🤣) Each team leader reviewed the table, and I carried out an audit by confirming list administrators, clarifying the purpose of each list, and verifying registered members. To make the mailing list information accessible to everyone, I shared the table via our cloud storage, BOX. To prevent the list from becoming outdated, I set up a process where any update requests must be submitted through a JIRA ticket, and I retained sole editing rights. Having a list makes it easy to check who was registered to which list and what types of lists existed. It also helped raise awareness across the Global Development Division that mailing lists don't update automatically. Another benefit of visualizing all the mailing lists was the ability to check for duplicates created for similar purposes. This yokoten (horizontal deployment) was possible because, although I belong to the Development Support Division, I also support the Global Development Division. [2] Revising the Logbook and Approval Route for Lending Security Cards At the Jinbocho Office, external vendors who come in more than twice a week are given security cards. It's a simple process, but the Excel file used for tracking didn't keep any history. So, I updated it to support change tracking and made it possible to easily identify which cards were currently unused. By using conditional formats and functions, only available cards could be selected. This prevents the accidental deletion of user information and makes audits much easier. Now automatically display the number of cards and available card numbers. 
Regarding the change in approval route, because I belong to the Development Support Division, I couldn't submit requests for security card issuance on my own. I had to ask a member from the Global Development Division to do it on my behalf, just to follow the correct approval route. This roundabout process was not in line with the actual work, so I raised the question with the relevant division, "Shouldn't we change this odd workflow?" After that, we organized the role and system of concurrent duties of the two divisions. Now, when I submit a request, I can select either the Development Support or Global Development approval route. This change eliminated the need for others to step in on my behalf and reduced the time spent on individual coordination.✨ [3] Managing Test Devices📱 Until now, test and verification devices such as smartphones used during system development were managed in a table on Confluence. But this made it difficult to see at a glance who was using which device, and the table often went out of date. In some cases, certain devices ended up being managed informally by individuals. At one point, someone almost purchased a new device without realizing we already had one. Around the same time, I found out that company-purchased books were being centrally managed using JIRA. That got me thinking, could we manage test devices the same way? ➡️ How We Made Book Management Easier As we transitioned to JIRA, I took the opportunity to do a full inventory check. This gave us visibility into whether anything was missing, broken, or not in use. (Some devices were even locked with unknown passwords.🔒) Because test devices are used on a daily basis, we physically checked each one during the audit. We recorded the password settings and uploaded photos of each device to their respective JIRA tickets. This helped resolve confusion when device names alone weren't clear enough. 
By managing the devices in JIRA, all members can check the rental status at a glance, and by setting rental expiration dates, we can now track usage. Visualization of lending conditions, detailed device information is included in the ticket. In addition, there is no longer the hassle of forgetting to update Confluence when borrowing or returning, or having to contact them via Slack every time. Most importantly, by linking devices to specific users and assigning return dates, I feel that all members have become more aware that they are "borrowing" the device. I also set up JIRA to send reminders to the admin when return deadlines are approaching. Rina-san helped me implement this based on the existing book management system. Thank you so much for your support! [4] Creating Name Tags for Shared Umbrellas☔ It all started with a request to "clean up umbrellas that have been left in the umbrella stand at the entrance." So, I checked the other umbrella stands as well. Any umbrellas that had been left for several days were announced internally and then disposed of. One comment I received in response to that announcement mentioned the idea of making the umbrellas available to anyone for shared use by putting a plastic tape over the clean umbrellas to be disposed of and repurpose them as office loaners. I noticed that many of the umbrellas in the office stands were clear plastic or plain designs. I figured the number of abandoned umbrellas would probably keep growing, and people might start grabbing the wrong ones by mistake. That reminded me of writing my name on some masking tape and made a name tag with a rubber band for my umbrella in the past. lol That worked fine for me, but I thought it would be nice if everyone had a name tag if possible, so I prepared Keychains. 
A keychain with your name on it to secure your umbrella.👍 This kaizen is not yet widespread, but I hope it will be used more and more, not only for umbrellas, but also as name tags to be attached to personal items stored in the refrigerator. Where Does the Kaizen Mindset Come From? Let me share the origins of my kaizen mindset. I've always enjoyed imagining things ever since I was a child. On my way to school, I used to often imagine things like, "Wouldn't it be cool if the road just moved on its own? ✨" or, "What if a shield popped up automatically when it rained?✨" (Kind of like something out of Doraemon, right?😅) I think kaizen is just an extension of that kind of thinking. I believe that great people follow that imagination into careers in research or engineering, but in my case, since I'm at an average level, it's more about solving the problems right in front of me. When I find myself thinking "If only this were easier…🤔"— this is when kaizen starts. When it comes to work, the fundamental principle is "Making work easier" means "making work enjoyable." Who wouldn't be happier if their job got just a little bit easier? Eventually, those easier ways of working become the norm. The starting point is to make things easier for myself, but I also take the other person or people who will use it into consideration as I go along. Whenever I’m doing something repetitive or routine, I find myself thinking, "Wouldn't it be nice if this were easier?” It may be difficult to fully realize that idea by myself, but ideally, the things that have already become easier now will eventually become norms, and whoever takes over from me will go on to make them even better. I'd be thrilled if the improvements I made didn't stay as the final version, but went beyond me and continued to evolve in someone else's hands. Something like this is exciting to imagine, isn't it? 
Next Kaizen - The Next Issue I Want to Tackle Some recurring tasks are still handled in Excel, and I want to streamline them further, possibly by using macros. So, I've recently started trial and error using Sherpa ^2 which was just released internally, as well as ChatGPT. With a kaizen mindset at the core, I'll continue working to make things better!✨
アバター
I am Aritome from the Development Support Division at KINTO Technologies. I am in charge of organizing all-hands meetings, supporting engineer development and training programs. At KINTO Technologies (KTC), we support our engineers' growth through their work at the company. For this reason, we actively encourage participation in communities outside the company and speaking at external events. (President Kotera and Vice President Kageyama also frequently speak at externally hosted events.) On February 8, 2023, Wada-san, a young engineer from our data analytics team, joined a panel discussion as a guest speaker at the Digital Human Capital Development Seminar in Chubu , hosted by the Central Japan Economic Federation and the Digital Literacy Council. What did you talk about? What is your role at our company? I interviewed Wada-san after the seminar to find out more. To start with, could you introduce yourself? Wada : Hello! My name is Wada and I work as a data scientist at KINTO Technologies. My main job is responding to analysis requests from both inside and outside the company, and developing AI functions for in-house apps. Thank you for having me today! Aritome : Thank you! Can you tell us about your career path before joining KINTO Technologies? Wada : I majored in social informatics at university. It's not a familiar term, but basically, it's an applied field of informatics that focuses on using information and communication technologies to solve social issues. After graduating from university, I joined an automotive parts manufacturer in 2019, where I worked on production management systems. Then in 2022, I made the move to my current role. What was the theme of the event, and what led to you speaking at the event? Wada : The Digital Human Capital Development Seminar in Chubu was aimed at management and mid-level employees of various companies in the Chubu region, which stressed the importance of all employees acquiring digital literacy from now on. 
At the event, three specific qualifications that will lead to acquiring digital literacy were recommended. The Information Technology Passport Examination, Data Scientist Certificate, and JDLA Deep Learning for GENERAL (G-certificate) In the latter half of the event, a panel discussion was held featuring Ryutaro Okada, Board Director and Secretary General of the Japan Deep Learning Association, along with four panelists who had gained digital literacy by obtaining certifications. The discussion covered what they found beneficial about earning the certifications, challenges they faced, and how the experience has influenced their work. I also hold the JDLA Deep Learning for ENGINEER certification (commonly known as the E-Certificate) There was a call for panelists for the event within the certification holders' community, and that's how I got the opportunity to take part in the event. Photo of the event venue Aritome : I've been hearing a lot about the G-Certificate lately. Can you tell us more about it? Can you tell us more about the G-Certificate? Wada : The G-Certificate is a qualification that tests basic knowledge of deep learning. The G stands for 'Generalist,' and the test covers not only the meaning of technical terms, but also knowledge of the history of technology and legal regulations. It does not require much knowledge of math or coding, so it is also recommended for non-engineers! There's also a related qualification called E-Certificate, which is more focused on deep learning theory and implementation skills. If you hold either, you can join a community called CDLE (Community of Deep Learning Evangelists). That's the community where I found the call for panelists for this event. CDLE is a community exclusively for people who've passed either the G-Certificate or the E-Certificate, both run by the Japan Deep Learning Association (JDLA). It's a space for certified members to connect and share knowledge. It operates entirely on a non-profit basis. 
*Quoted from the CDLE guidelines, CDLE community website . Aritome : So, there's a community of certified members. With that shared learning experience, the conversation's sure to be lively! What motivated you to get certified in the first place? Wada : I thought that obtaining a certification would be the most efficient way to acquire systematic knowledge! When I first started learning about AI, I was mostly referencing sample code I found online and diving into machine learning and deep learning without really understanding how anything worked. At first, it was fun to see things run, but gradually I became interested in the mechanics behind. That's when I began reading more advanced books and technical blogs. However, learning this way gave me only bits of knowledge. It was tough to learn the field in a way that was both systematic and comprehensive. So I decided to take the certification exam, since its syllabus was packed with carefully curated content and suited for obtaining systematic knowledge. To put it in an analogy, it's like filling a container with your favorite pebbles, each representing bits of knowledge, but there are still gaps. The syllabus is like water that fills those gaps with structured learning! (Does that make sense?) Image of knowledge acquisition Aritome : I totally get that feeling of not knowing where to start when trying something new. When you're self-taught, it's hard to feel confident if your knowledge is all over the place. What challenges did you face and how did you approach studying for the certification? Wada : I had a certain level of understanding of how to use the technology from my self-study, but I had to re-learn the background, basic technology, history leading up to the technology, and legal frameworks. In addition, at that time, the E-certificate exam didn't use any specific frameworks, and the questions were based on scratch implementations using NumPy. 
Since I had been working with scikit-learn and Keras, getting used to the unfamiliar syntax was definitely a challenge. But I wanted to fill in the gaps in my knowledge, so it was a perfect match for my original goal, worth the effort (laughs). Aritome : Because it's a certification, I imagine you really have to study the full scope of the field, even areas you're not as comfortable with. It sounds like a challenge! Did getting the certification or studying a new field lead to any changes for you? Wada : Learning all the key terminology around AI gave me the confidence to start tackling more advanced books, including academic papers I wouldn't have dared to touch before. I can't say I breeze through them, but "Ohhh! I can read! I'm reading!"(laughs) Aritome : That sense of growth must make all the effort feel worthwhile! What were some of the best things about being certified? Wada : Nowadays, AI is being integrated into many different areas, creating significant value I think having the ability to look at different areas and ask, "What if I combined AI with this?"will become one of my personal strengths. With tools like ChatGPT lowering the barrier to entry, I believe we'll see even more accessible AI services emerging, and this trend will only continue to grow. At KINTO Technologies, are there any systems or cultural elements in place to support learning? Wada : There's a strong culture of sharing what we learn. We have study sessions across different scopes, within teams, across departments, and company-wide. Even small information sharing is encouraged. Our tech news Slack channel is constantly buzzing with interesting updates. You can also easily request to purchase books that are useful for work, and you can access a variety of books on the online bookshelf shared between offices. If the opportunity comes up, like my case, you're free to speak at external events, too! What kind of employees are there at KINTO Technologies? 
Wada : My first impression after joining was, "There are all kinds of people here!"(lol) At my previous job, almost everyone was a new graduates, so coming into a company where everyone is mid-career was a big change. Everyone brings their own specialty from past experience, and it's really inspiring to see those strengths complement each other to get things done! I am expected to work as a specialist in the AI field, which makes it a really rewarding environment where I can keep growing. Is there anything you personally do to promote a learning culture? Wada : I try to be open about my own skills, what I've been learning, and what I'm interested in. It leads to people saying things like "I found this article" or asking "Can you explain this?" While I'm explaining, I often learn something new, too. It creates a great feedback loop. Lastly, do you have a message for our readers? Wada : I wasn't able to talk much about technical side this time, but I'd like to write more about the AI products I work on in the future! Thank you for reading all the way to the end! We Are Hiring! We are looking for people to work with us to create the future of mobility together. If you are interested, please feel free to contact us for a casual interview. @ card
アバター
こんにちは!SREチームのkasaiです。 KINTOテクノロジーズ株式会社(以下、KTC)は、2025年7月11日(金)〜12日(土)にTOC有明で開催される「SRE NEXT 2025」にて、プラチナスポンサーとして協賛いたします! KTCがSRE NEXTのスポンサーになるのは今回が初めてです。 弊社SREチームは昨年から再スタートを切りました。 SREを実践する難しさを日々感じつつも、サービスの信頼性を高めるための活動に取り組んでいます。みなさんも同じように試行錯誤を重ねているのではないかと思います。 そんなSREの方々が集まる場を支えられればと思い、スポンサーに立候補いたしました! SRE NEXTとは 信頼性に関するプラクティスに深い関心を持つエンジニアのためのカンファレンスです。 同じくコミュニティベースのSRE勉強会である「SRE Lounge」のメンバーが中心となり運営・開催されます。 SRE NEXT 2025のテーマは「Talk NEXT」です。SRE NEXT 2023で掲げた価値観 Diversity、Interactivity、Empathyを大切にしつつ、SREの担う幅広い技術領域のトピックや組織、人材育成に対してディスカッションやコミュニケーションを通じて、新たな知見や発見を得られる場にします。 Home | SRE NEXT 2025 開催概要 開催日:2025年7月11日(金)・12日(土) 会場:TOC有明及びオンライン 公式サイト: https://sre-next.dev/2025/ スポンサーセッションあります! DAY 2 (7/12) 13:00 - 13:20にTrack Bにて「ロールが細分化された組織でSREは何をするか?」というタイトルで長内がスポンサーセッションをする予定です。 細分化された組織の中においてロールが重なり合う中「自分たちは何をすべきか?」「SREとしての価値はどこにあるのか?」といった問いにSREチームがどのように向き合ってきたのかをお話しします。 詳細: https://sre-next.dev/2025/schedule/#slot081 ブース出展もします! ブースでは簡単に答えられるアンケートを用意しています。 ご回答いただくとガチャガチャが回せて、オリジナルノベルティーが当たりますので、ぜひブースに遊びにきてください! 当日はSREのメンバーもブースにいますので、SREについてTalkしましょう!
アバター
Introduction Hello, I am Nishida, a member of the payment platform development team at KINTO Technologies. In this article, I'd like to share how we used AWS SAM to build the backend for an internal payment operations system, which was also introduced earlier in this article . What is AWS SAM? First off, AWS SAM (Serverless Application Model) is a tool that makes it easy to build and deploy serverless services like Lambda and API Gateway. With AWS SAM, developers no longer need in-depth knowledge of infrastructure and can focus on building applications using a serverless architecture. Why We Chose AWS SAM Right after joining KINTO Technologies, I became involved in developing a payment operations system. Given the short development timeline of just 2 to 3 months, we needed to select backend technologies that supported rapid iteration Since it was an internal system with limited traffic, we decided to go with AWS SAM, leveraging my prior experience with it from a previous role. How to Use AWS SAM I'd like to use AWS SAM to build a REST API using API Gateway and Lambda in a serverless setup. Here's what the directory structure looks like: . ├── hello_world │ ├── __init__.py │ └── app.py └── template.yaml First, install AWS SAM from the official documentation . AWS SAM uses a file called a template to manage AWS resources. AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Description: > sam-app Sample SAM Template for sam-app Resources: HelloWorldFunction: Type: AWS::Serverless::Function Properties: FunctionName: HelloWorldFunction CodeUri: hello_world/ Handler: app.lambda_handler Runtime: python3.9 Events: HelloWorld: Type: Api Properties: Path: /hello Method: get import json def lambda_handler(event, context): body = { "message": "hello world", } response = { "statusCode": 200, "body": json.dumps(body) } return response We deploy using the sam command. This time, I'll try deploying interactively using the --guided option. 
sam deploy --guided Enter the stack name, region, etc. Stack Name [sam-app]: # デプロイするスタック名を入力 AWS Region [ap-northeast-1]: # デプロイするリージョンを入力 #Shows you resources changes to be deployed and require a 'Y' to initiate deploy Confirm changes before deploy [y/N]: # 変更内容を確認するかを入力 #SAM needs permission to be able to create roles to connect to the resources in your template Allow SAM CLI IAM role creation [Y/n]: # SAM CLI が IAM ロールを作成するかを入力 #Preserves the state of previously provisioned resources when an operation fails Disable rollback [y/N]: # ロールバックを無効にするかを入力 HelloWorldFunction may not have authorization defined, Is this okay? [y/N]: # Lambda に対する認可を設定するかを入力 Save arguments to configuration file [Y/n]: # 設定を保存するかを入力 SAM configuration file [samconfig.toml]: # 設定ファイルの名前を入力 SAM configuration environment [default]: # 環境名を入力 Once the deployment is complete, check the Lambda console to confirm that HelloWorldFunction has been created. You can also find the endpoint by selecting the API Gateway that triggers Lambda. Let's try sending a request using curl. curl https://xxxxxxxxxx.execute-api.ap-northeast-1.amazonaws.com/Prod/hello If the request is successful, you'll get a response like this: {"message": "hello world"} After Trying It Out As I had prior experience with AWS SAM, I was able to get the basic infrastructure up and running in just a day, which helped us stay on track with the development schedule. Once you're familiar with it, one of the best things about AWS SAM is how easy it makes building APIs in a serverless setup. In addition to API Gateway and Lambda, we also use AWS SAM to build EventBridge and SQS, which are used for periodic processing such as batch processing. The official documentation has also improved a lot, which I think has lowered the barrier to getting started. Conclusion In this article, I shared how we quickly built the backend for a payment operations system from scratch using AWS SAM. 
Since it's a tool provided by AWS, it has high compatibility, reduces the overhead of environment setup, and allows you to focus more on actual development. If you're interested, I highly recommend giving it a try.
アバター
Introduction Hello. My name is Shimamura , and I used to be a DevOps engineer in the Platform Group, but now I'm on the Operation Tool Manager team within the same gorup, where I'm responsible for Platform Engineering and tool-related development and operations. KINTO Technologies' Platform Group promotes IaC using Terraform. We define design patterns that are frequently used within the company and provide them as reference architectures, and each environment is built based on those patterns. For the sake of control, each environment from development to production is built upon ticket-based requests. Before building the development environment, we prepare a sandbox environment (AWS account) for the application department's verification. However, this is often built manually and there are many differences with the environment built by Platform Group. If a design pattern were available, the environment could be automatically built upon developer request, which would eliminate the waiting time between the request and the creation of the environment, and improve development efficiency. I think this kind of request-based automated building is a common requirement in DevOps, but it seems that Kubernetes is still the most commonly used application execution platform. KINTO Technologies uses Amazon ECS + Fargate as its application execution platform, so I would like to introduce this as a (probably) rare example of automated environment building for ECS. Background Challenges The system is not around when application developers need it (during verification/launch) As part of the DevOps activities, I researched AutoProvisioning (automated environment building) and felt that it was common, but it is not present within our company. There is a large difference between an environment built in a sandbox environment with a relatively high degree of freedom and an environment built according to the design patterns provided by Platform Group. 
IAM permissions and security Presence of common components such as VPC/Subnet/NAT Gateway etc. As a result, the communication costs becomes higher for both parties when requesting a build. Solution Why not create an automated building mechanism? Since this is a design pattern, there are some AWS services that may be missing, but it's tolerable and presumably they will be added manually. As a first step, it's worthwhile to automatically build an environment on AWS in about an hour so that you can check the operation of your application and prepare for CICD. Let's Make It Thankfully, Terraform is becoming more modular so we can build environments in a variety of patterns by simply writing a single file (locals.tf), so I think of the below as a base: Used in-house created modules (Must) Built with in-house design patterns as a base (Must) Made sure that DNS is automatically configured and communication is possible via HTTPS. It should be able to automatically generate locals.tf Prototyped the application to see if it can be structured and generated using Golang's HCLWrite After prototyping, I found that structuring was difficult, so I eventually gave up on automatic generation. I took care of it by replacing some parameters from the template file Since the process was about replacing, detailed settings for each component are not possible. The Final Result From the GUI on the CMDB select product design patterns When you select this and click Create New, the specified configuration will be built in the sandbox environment of the department associated with the product in 10 to 40 minutes (depending on the configuration). Overall Configuration Individual Explanation I separated the part that creates the Terraform code from the part that actually builds it in the sandbox environment so that they could be tested separately. 
Terraform Code Generation Parts ProvisioningSourceRepo Issue management GitHub Actions execution Terraform code for the created sandbox environment CIDR list for each sandbox environment ProvisioningAppRepo Template for design pattern Yaml (buildspec.yml) in CodeBuild Various ShellScripts running on CodeBuild InfraRepo TerraformModule AWS Environment Building Part S3 Source and Artifact in CodePipeline EventBridge CodePipeline Trigger CodePipeline/CodeBuild Actual construction environment Route53 (Dev) Delegate authority from the production DNS and use Route53 in the Dev environment Terratest (Apply) The Terratest sample looks like this. The test is nested so that if any of the Init, Plan, or Apply steps fail, the test will end. If the Apply step fails midway, Destroy what was applied up to that point. I think you will be able to write it more neatly if you have knowledge of Golang. package test import ( "github.com/gruntwork-io/terratest/modules/terraform" "testing" ) func TestTerraformInitPlanApply(t *testing.T) { t.Parallel() awsRegion := "ap-northeast-1" terraformOptions := &terraform.Options{ TerraformDir: "TerraformファイルがあるPATH" + data.uuid, EnvVars: map[string]string{ "AWS_DEFAULT_REGION": awsRegion, }, } // InitでErrorがなければPlan、PlanでErrorがなければApplyと // IFで入れ子構造の対応を実施(並列だとInitで失敗してもテストとしてすべて走る) if _, err := terraform.InitE(t, terraformOptions); err != nil { t.Error("Terraform Init Error.") } else { if _, err := terraform.PlanE(t, terraformOptions); err != nil { t.Error("Terraform Plan Error.") } else { if _, err := terraform.ApplyE(t, terraformOptions); err != nil { t.Error("Terraform Apply Error.") terraform.Destroy(t, terraformOptions) } else { // 正常終了 } } } } Elements Name Overview CMDB (in-house production) Configuration Management Database to manage databases Since rich functions were unnecessary, KINTO Technologies has developed an in-house CMDB. On top of that, we are creating a request form for automatic building. 
In addition, after being built, FQDN and other information are automatically registered in the CMDB. Terraform A product for coding various services, AWS among them. IaC. In-house design patterns and modules are created with Terraform. GitHub A version control system for storing source code. Build requests are logged by raising an Issue. Also, since Terraform code is required for deletion, etc., we also save each code for the sandbox environment. GitHubActions The CI/CD tool included in GitHub. At KINTO Technologies, we utilize GitHub Actions for tasks such as building and releasing applications In this case, we are using the issue filing as a trigger to determine whether to Create/Delete, select the necessary code group, compress it, and connect to AWS. CodePipeline/CodeBuild CICD-related tools provided by AWS. Using it to run Terraform code. We could run Terraform/Terratest on GitHubActions, but since we use GitHubActions daily for application builds, we chose to use this to avoid the impact on each product team due to usage limits, etc. Terratest A Go library for testing infrastructure code, etc. You can also test modules, but in this case we are using it to recover from failures in the middle of Terraform Apply. Click here for the official site Restrictions We target multiple sandbox environments (AWS accounts) associated with each development team, but only one can be created at a time (exclusive). Since CodePipeline/CodeBuild are running in the same environment due to DNS We also create parts that are not run in the application. It may seem like there is a lot of waste, but this is due to the build design pattern. It is built as a seamless line from FQDN to DB. You need to set the VPC, etc. in the Module beforehand. You need to build a set of common components such as VPC beforehand. 
What to Do if There Are No Modules KINTO Technologies has been working on design patterns for some time, so we have the advantage of being able to easily use Terraform to build everything from CloudFront to RDS. What can you do if you haven't progressed that far but still want to implement AutoProvisioning using ECS? I Thought About It Create up until the ECS Cluster in advance. ECS Service ECR Repository ALB TargetGroup ALB ListenerRule IAM Role Route53 I think it would be easier to prepare a Terraform file with the above, and then build it. TaskDefinition can be created if you have permission, so it's up to the user. Configuration Proposal I think CodePipeline/CodeBuild would be fine instead of GitHubActions, but when you consider the need to prepare a GUI like CodeCommit, wouldn't it be easier to just put it all together on GitHub? So, here is the configuration. I haven't used AWS Proton yet, so I haven't considered it. I think it would be possible to separate the Parameter parts such as locals.tf and create them using the sed command or Golang's HCL library. Once you have confirmed the build using Terratest, etc., add any FQDN to the ALB alias and match it with the ListenerRule. Next Steps Originally, we had hoped to offer it in advance to get feedback, but at present it hasn't been used much. We have provided a GUI for this purpose, and we plan to start by having a variety of people use it and receive feedback. However, I think there are many things we can do, such as increasing the number of compatible design patterns and simplifying the associated CICD settings. I would really like to introduce Kubernetes and then move on to AutoProvisioning, which has many applications. |・ω・`) Is that not possible? Impressions To be honest, I tried hard to automatically generate templates using Golang, but gave up because the HCL structure of our in-house design patterns was difficult to analyze and reconstruct. 
There was some talk internally about this being a reinvention of the console, but if we could get that far, I think we might be able to automate not only the sandbox environment but also the STG environment. For Platform Group, the environment can be created simply by tapping and selecting a few items on the GUI. It's really simple. To be honest, I wanted to reach that level, but I think it was good that I was able to take even the first step. In Kubernetes, I think it might be possible to create something similar by preparing a Helm chart as a template. I would like to consider alternative methods and try various things. Summary The Operation Tool Manager Team oversees and develops tools used internally throughout the organization. As I wrote in my previous O11y article , we organize the mechanisms and present them to application developers so that they can use them on a self-service basis, supporting the creation of value by these developers. A PlatformEngineering meet up was held a little while ago, and it's reassuring to know that this is in line with the direction we're moving forward in. The Operation Tool Manager team also has an in-house tool building department, allowing developers to quickly and intensively create value for their applications. Please feel free to contact us if you are interested in any of these activities or would like to hear from us. @ card
Introduction

Hello! I'm Tanachu from the Security & Privacy Group at KINTO Technologies! I usually work on log monitoring and analysis using SIEM, building monitoring systems, and handling cloud security tasks as part of some projects in the SCoE group (you can read about what the SCoE group is here). Here is my self-introduction. In this article, I share a report on our visit to the "Sysdig Kraken Hunter Workshop," held on March 26, 2025, at the Collaboration Style event space near Nagoya Station.

The Event Space

Using Sysdig Secure at KINTO Technologies

At KINTO Technologies, we mainly use Sysdig Secure for Cloud Security Posture Management (CSPM) and Cloud Detection and Response (CDR). I've put together more details in this blog post, so feel free to take a look: A Day in the Life of a KTC Cloud Security Engineer

What Is the Sysdig Kraken Hunter Workshop?

Sysdig is a company founded by Loris Degioanni, co-creator of the well-known network capture tool Wireshark. It offers security solutions for cloud and container environments, built around Falco, an open-source standard for cloud-native threat detection developed by Sysdig. We use Sysdig Secure to monitor cloud activities such as permission settings and account or resource creation in our cloud environments. The Sysdig Kraken Hunter Workshop is a hands-on session where you run simulated attacks on a demo Amazon EKS environment and work through a series of modules using Sysdig to practice detection, investigation, and response. If you pass the post-workshop exam, you earn a Kraken Hunter certification badge. In this blog, I'll walk you through the three modules that stood out the most.

Module 1: Simulated Attack and Event Investigation

In this module, we carried out a simulated attack on a demo Amazon Elastic Kubernetes Service (Amazon EKS) environment and used Sysdig Secure to detect and investigate the event.
First, following the provided documentation, we simulated a remote code execution (RCE) attack on the Amazon EKS demo environment. The simulated actions included:

- Reading, writing, and executing arbitrary files on the system
- Downloading files onto the system

After running the simulated attack, we accessed the Sysdig Secure console in a browser. By checking the status of the targeted resources, we could confirm that Sysdig had detected events related to the attack.

Reference: sysdig-aws workshop-instructions-JP

Digging deeper, we confirmed that Sysdig Secure had picked up the simulated attack in real time.

Reference: sysdig-aws workshop-instructions-JP

This hands-on flow let us try out a simulated attack and see exactly how Sysdig Secure handles detection and investigation through its console. By running the attack myself and going through the investigation process with Sysdig Secure, I came away with a solid understanding of what the tool is capable of.

Module 2: Host and Container Vulnerability Management

In this module, we explored Sysdig Secure's features for managing vulnerabilities in both hosts and containers. Since our own products use containers and follow a microservices architecture, this topic is especially relevant to us. Sysdig Secure offers several types of vulnerability scans: Runtime Vulnerability Scanning, Pipeline Vulnerability Scanning, and Registry Vulnerability Scanning. The Runtime Vulnerability Scan lists all containers that have run in your monitored environment in the past 15 minutes, along with all hosts/nodes that have the Sysdig Secure Agent installed. Resources are automatically sorted by severity based on the number and risk level of vulnerabilities, making it easy to spot what needs attention first.

Reference: sysdig-aws workshop-instructions-JP

You can also click on any listed item to drill down and view vulnerability details.
Reference: sysdig-aws workshop-instructions-JP

The Pipeline Vulnerability Scan checks container images for vulnerabilities before they're pushed to a registry or deployed to a runtime environment. The Registry Vulnerability Scan targets images already stored in your container registry. This way, you can check for vulnerabilities at each phase of the container image lifecycle, from development to production. There are plenty of security tools out there for vulnerability management, but the Sysdig Secure console stood out to me for its sophisticated UI and intuitive usability.

Module 3: Container Posture & Compliance Management

In this module, we experienced how Sysdig Secure helps manage posture and compliance in cloud environments. As you may have seen in the news, misconfigurations in the cloud are a major cause of security incidents. Since we build our products in a fully cloud-native setup, this isn't just someone else's problem; it's something we take seriously, which is why this feature caught our attention. For posture and compliance management, Sysdig Secure lets you check whether your environment complies with common standards like CIS, NIST, SOC 2, PCI DSS, and ISO 27001.

Reference: sysdig-aws workshop-instructions-JP

It also highlights non-compliant resources and shows you how to fix them. While it's hard to say whether the suggested steps will be practical in every situation, having that guidance readily available saves a lot of effort researching fixes. As an admin, that's a huge plus.

Reference: sysdig-aws workshop-instructions-JP

Kraken Hunter Certification Exam

The Kraken Hunter certification exam had about 30 to 40 questions on a dedicated web page. The questions covered topics from the workshop, so if you paid attention, you had a solid shot at passing. I struggled a bit with some of the finer details introduced at the start of the workshop, but I managed to pass!
Here's the certification badge awarded to those who pass:

Kraken Hunter Certification Badge

Using Sysdig Secure Going Forward

We're exploring and pushing the following ways to get the most out of Sysdig Secure:

- CSPM: Creating custom policy rules in Rego based on our governance framework to ensure cloud security that aligns with our internal policies.
- CDR: Building custom rules using Falco to expand threat detection tailored to our environment.
- CWP: Testing and implementing Cloud Workload Protection (CWP) to secure our container workloads.

Summary

In the Sysdig Kraken Hunter Workshop, we conducted a simulated attack against an Amazon EKS demo environment and got hands-on with Sysdig Secure: detection, investigation, response, and more. Since we've only used a limited set of Sysdig Secure's features at our company, most of what was introduced was new to us. While we fumbled a bit at first, it was a great chance to see what the tool is truly capable of. Joining the in-person workshop also gave us the chance to hear real stories from other companies about their challenges and efforts in the field. Big thanks to the organizers for making this happen.

Conclusion

The Security & Privacy Group and the SCoE group, which joined this workshop, are looking for new teammates. We welcome not only those with hands-on experience in cloud security but also those without experience who have a keen interest in the field. Please feel free to contact us. For more information, please check here.
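As a closing illustration of the custom detection rules mentioned under "Using Sysdig Secure Going Forward," here is a hedged sketch of what a custom Falco rules file might look like. The rule below is a generic example following Falco's documented rule syntax, not one of our actual in-house rules:

```shell
#!/bin/sh
# Hypothetical sketch: writing out a custom Falco rules file.
# The rule is a generic example of Falco's rule syntax (condition,
# output, priority), not one of our actual in-house detection rules.
cat > custom_rules.yaml <<'EOF'
- rule: Interactive Shell in Container
  desc: Detect an interactive shell spawned inside a container
  condition: spawned_process and container and proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
  tags: [container, shell, custom]
EOF

# A file like this would typically be loaded with:
#   falco -r custom_rules.yaml
echo "wrote custom_rules.yaml"
```

Conditions are built from Falco's event fields and macros (such as spawned_process and container), which is what makes it practical to tailor detection to a specific environment.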
Introduction

Hello! I'm high-g (@high_g_engineer) from the New Car Subscription Development Group, based at Osaka Tech Lab. In this article, I'd like to introduce a frontend study group we launched internally.

How It Started

One day, during a 1-on-1 with my manager in the New Car Subscription Development Group, I happened to show them the timetable for TSKaigi 2024. Looking at it, my manager made a positive suggestion: "It covers a wide range of topics and makes excellent study material. It would be great to have a study group where frontend engineers from different departments use it to share knowledge and build horizontal connections." That comment prompted me to recruit frontend engineers who seemed eager to participate, and planning for the internal study group began.

Goals of the Study Group

The study group has three goals:

- Learning: Deepen understanding by sharing and discussing talks from external conferences and web standards.
- Practice: Apply what we learn to actual products.
- Sharing: Share the insights and challenges gained from practice among participants and accumulate them as organizational knowledge.

Rather than stopping at acquiring knowledge, the study group aims for a practical state where what we learn is actually usable in day-to-day work. We reserve a one-hour slot once a week and run each session in whatever format suits the content: read-throughs, mob programming, hands-on exercises, and so on.

Main Themes and How the Group Evolved

Since September 30, 2024, we have held 34 sessions, covering the following.

Sharing insights from conferences and tech events (sessions 1-17)

To get the study group established, we first studied the latest trends in frontend technology, focusing on talks from TSKaigi 2024 and JSConf JP 2024:

- Thinking About the Future of Prettier (session 1): the future direction of code formatters
- Improving TypeScript Performance (session 2): compared against issues in our actual production code
- This Is What Happened When We Unified Everything in TypeScript! (session 3): a full-stack development case study
- The Journey to TypeScript: Helpfeel's Trial and Error on the Road to Success (session 5)
- Type-Safe and Efficient Routing with TanStack Router (session 7)
- Storybook-Driven Development: Reproducibility and Efficiency in UI Development (session 9)
- Setting a "Trust Boundary" Instead of Over-Trusting TypeScript Type Definitions (session 10)
- mizchi's "LAPRAS Public Performance Tuning" (sessions 12-13): learning performance improvement from an external case study
- Micro-Frontends on the Yahoo! JAPAN Top Page (session 15): a development case study from a large organization
- Interoperability of JavaScript Module Resolution (session 16)
- You Don't Know Figma Yet: Hacking Figma with JS (session 17)

Cross-team knowledge sharing and mutual understanding (sessions 18-24)

As the number of participants grew, we shared individual skills and the state of each department's frontend team. This helped us consider the group's future direction and, by reviewing each team's projects at the code level, share frontend challenges across KINTO Technologies as a whole.

- Retrospective So Far (session 18): discussing the direction of the study group
- Self-Introductions, with a Look Back at Our Careers (session 19): promoting mutual understanding among members
- Frontend Development Status Sharing by Team (sessions 20-24): a five-part series in which each team shared its tech stack, challenges, and initiatives in detail

Understanding and practicing web standards (sessions 25-28)

During a retrospective, some members said they wanted to understand web specifications better, so we dug into Baseline together. https://web.dev/baseline?hl=ja

- Understanding Baseline (sessions 25-27): a three-part series studying web standards systematically
- Baseline Retrospective and Discussion of What's Next (session 28): consolidating what we learned and considering future directions

Hands-on performance improvement (session 29 to present)

Using mizchi's public performance tuning video as a reference, we practiced performance tuning on our actual products. https://www.youtube.com/watch?v=j0MtGpJX81E

- KINTO FACTORY Performance Improvement (sessions 29-30): a two-part series sharing concrete improvement measures and results
- TSKaigi 2025 Talk Preview (session 31): internal members previewing their conference talks
- KINTO ONE Performance Improvement (sessions 32-34): a three-part series in which everyone carried out actual improvement work in a mob programming format, our most hands-on learning yet

Results of Continuous Sessions

Growth and retention of participants

The study group started with five members and, through continuous sessions, has grown into one that consistently draws more than ten participants. Attendees now come not only from the New Car Subscription Development Group but also from other groups, realizing the horizontal connections we aimed for. Most of the initial members still attend, and new members tend to stay as well, which suggests the study group is a valuable use of participants' time.

Gradual improvement of the organization's technical capability

Knowledge gained at conferences and tech events usually stays within the scope of individual learning, but studying together with frontend engineers from different departments, each with different concerns, benefits the whole organization and surfaces perspectives an individual would easily miss. Members who studied web standards systematically in the Baseline series can now understand, at the specification level, technologies they had previously used "more or less by feel," and can make better-grounded technology choices at work.

Practical problem-solving skills

In recent sessions, we tackled performance improvement on actual products such as KINTO ONE and KINTO FACTORY in a mob programming format. Applying the study group's insights directly to products and verifying them hands-on dispelled our reluctance toward performance tuning, and the work proved practical and valuable enough to contribute to sales.

Deeper technical collaboration between teams

Through the development status sharing sessions, we gained more opportunities to learn about other teams' technical initiatives and challenges that we had not fully grasped before. As a result, teams facing similar challenges increasingly consult each other individually after the study group. The study group has begun to function not just as a place for learning but as a hub for solving real business problems.

Summary

This study group, which began with the goal of strengthening horizontal connections among our in-house frontend engineers, started with sharing insights from external conferences and has evolved into performance improvement work on real products. Through continued sessions, we aim to keep nurturing those connections and contributing to the technical growth of the organization as a whole.
Introduction

Hello! My name is K, and I work as a designer at KINTO Technologies. I usually work mainly on UI/UX design for e-commerce sites, but sometimes I also have the opportunity to work on communication design. Back in November 2024, I was responsible for designing the logo that became the face of our internal event, the CHO All-Hands Meeting! Since this was a special opportunity, I'd like to casually introduce the behind-the-scenes side of the production and the points I paid particular attention to.

What Is a Logo?

A logo is more than just decoration; it serves as the face of a brand or event. At a glance, people can recognize it and think, "Oh, that's the event!" or "Hey, I've seen this before!" It plays a big role in shaping the impression it leaves. Instead of just creating something because it seems cool, I ask myself: "What kind of vibe will this event have?" "What message is it trying to send?" "What impression do I want people to walk away with?" It was a good reminder of how important it is to design with intention.

Defining the Concept: Understanding the Core of the Event

First, I had a chat with the art director in charge of the event's overall artwork. As we clarified the event's purpose and key message, I started shaping the concept behind the logo. The concept of the CHO All-Hands Meeting was "initiative" and "connection":

- A space where everyone proactively connects with their colleagues
- A place where we truly feel that our company encourages taking initiative
- An energetic, lively atmosphere with colleagues that fuels our motivation for the next challenge

I wanted to capture all of that through a design that feels "free and fun" and has "an energetic vibe that brings people together."

Exploring Design Directions

The next phase was to find out what kind of visuals would fit the concept. How do I express "a design that feels free and fun" and "an energetic vibe that brings people together"?
When I thought about this, one theme came to mind: "otaku culture x technology." The event draws lots of people from development and creative teams. For many of them, things like anime, games, mecha, and manga are not only familiar but genuinely exciting. By blending that with a futuristic tech vibe, I felt we could create a world that feels even more open, energetic, and full of positive momentum. Some visual elements we considered were:

- Elements of mecha, robots, and tokusatsu-inspired details: to bring in that mechanical, industrial edge.
- Manga and comic-style elements: to experiment with bold, energetic lettering and speech-bubble shapes.
- Digital-style typography: to add a subtle futuristic vibe.

I started by sketching out rough ideas by hand, letting the concepts flow freely from there.

From Sketch to Digital

Once the direction became clear from the sketches, I moved into Illustrator to start digitizing the design:

- Cleaned up the rough drafts and created the base shape.
- From there, made several variations, each with subtle tweaks in nuance.
- Discussed with the art director which design best embodied the concept.

Of course, plenty of ideas didn't make the cut, but going through that trial-and-error process really reminded me how essential it is to creating great design.

Polishing the Details

Once the rough draft was locked in, it was time to move into the final phase. At this stage, I kept refining things, tweaking the details until everything felt just right. One of the key elements that really shapes the impression of a logo is the font. For example, rounded fonts can give off a soft, friendly vibe, while sharper fonts feel more sleek and polished. Even small differences like that can completely change the overall tone of the logo. This time, instead of relying on existing fonts, I created an original font.
While keeping the event's core themes of autonomy and interaction in mind, I made adjustments with a focus on the following points:

- Improve readability: adjust letter width, proportions, and spacing to make the text easier to read and give the overall design a cohesive feel.
- Refine curves: reduce the number of paths to create smoother, more polished shapes.
- Harmony between kanji and katakana: keep the shapes consistent so the characters feel balanced when placed together.

Compared to the original red guidelines, the final shape changed significantly. This process really reminded me how even the smallest design choices, like font style and tiny shape tweaks, can greatly affect the impression a logo gives.

Finalizing the Logo: Balancing Playfulness and Versatility

And finally, the logo was complete! It strikes a nice balance between playfulness and practical versatility, making it easy to use in all kinds of contexts. The mix of subculture and tech came through naturally in the design, and careful attention to the shapes and fonts really raised the overall finish and quality.

Summary

Logo design is something where even the tiniest details can totally change the impression it gives. This project reminded me how important it is to keep pushing until you hit that "This is it!" moment. This time, I think I managed to go beyond just making "something that looks cool" and created a design that really captures the spirit of the event and works across different contexts. If this article sparks even a little inspiration or insight for someone out there, I'll be happy!