KINTO Technologies Tech Blog

Hello, I'm Maya from the Tech Blog team at KINTO Technologies! I interviewed those who joined us in October 2023 about their first impressions of the company and summarized them in this article. I hope this content will be useful for those who are interested in KINTO Technologies, and serve as a reflection for the members who participated in the interview.

IU

Self-introduction: I am IU from the KINTO ONE New Vehicle Subscription Development Group. I am in charge of front-end development for KINTO ONE, such as the membership screens and contract simulation screens.

How is your team structured? We are basically a front-end team of six people. I am participating together with Mr. Shirahama, who joined the company at the same time.

What was your first impression of KINTO Technologies when you joined? Were there any surprises? Before joining, I anticipated it would be somewhat old-fashioned and rigid, given that it is a subsidiary of a major corporation. However, I've found the working atmosphere to be more relaxed than expected. In particular, I feel a strong sense of speed, with services improving in short cycles from idea planning to implementation and release.

What is the atmosphere like on site? Each person concentrates on their individual tasks and often works independently, but we come together at meetings to share updates on the tasks each person is handling. When faced with challenges, we engage in group discussions to find solutions, and we hold periodic reading sessions to deepen our knowledge.

How did you feel about writing a blog post? I knew about the Tech Blog before I joined the company, but I thought that only a limited number of people could write for it. In fact, the environment is inclusive, and everyone is not only welcome but also encouraged to write. I enjoy writing articles, so I would like to participate actively in the Tech Blog.
Jongseok Bae

Self-introduction: My name is Jongseok Bae, from South Korea; I joined the company in October. I am an Android developer in the Prism Team of the Mobile App Development Group, Platform Development Division.

How is your team structured? The Prism team manages schedules and meetings using the Agile framework.

What was your first impression of KINTO Technologies when you joined? Were there any gaps? At the beginning, I felt that the company was explained thoroughly through on-the-job training and other activities. That was different from my previous experiences, where companies typically provided only a brief explanation lasting about a day. I also noticed that many in-house study sessions are held, which is a really nice way to catch up on information that might otherwise be easily missed.

What is the atmosphere like on site? The team members are kind. I found it to be a good work environment where everyday communication lets me ask questions, learn, and share opinions in the course of work.

How did you feel about writing a blog post? It felt burdensome at first, but as I reflected on what I experienced over the month, I realized it's a great way to organize my thoughts.

Martin Hewitt

Self-introduction: I'm Martin from France. I'm involved in Platform Engineering at KINTO Technologies.

How is your team structured? The Platform Group consists of six teams, each specializing in a particular area of expertise, including SRE, DBRE, Platform Engineering, and Cloud.

What was your first impression of KINTO Technologies when you joined? Were there any gaps? They introduced us to the company faithfully! This never happens in France. I felt that it was a modern company, which is different from the image I had of Japanese companies.

What is the atmosphere like on site? Everyone is so kind! I was nervous at first, but I quickly got used to it.

How did you feel about writing a blog post? How fast!

U.A

Self-introduction: I am U.A from the KINTO ONE Development Group.
As a Producer and PdM (Product Manager), I am in charge of supporting the digital transformation of Toyota dealers.

How is your team structured? I belong to the Digital Transformation Planning team within the Owned Media & Incubation Development Group. Our team visits Toyota dealers directly, asks about their problems, and solves them with the power of IT. The team consists of six members in total: two producers, three directors, and one designer.

What was your first impression of KINTO Technologies when you joined? Were there any surprises? I was surprised by everything: the freedom to dress casually, to color my hair freely, and the flexible work hours. I feel like I am in an enriching environment, drawing on my past experience while collaborating with specialists from diverse fields in each department. The more people I get to know, the more I can stretch my abilities.

What is the atmosphere like on site? The Digital Transformation Planning team is full of personalities and never runs out of topics to discuss. I feel how important communication is every day, as everyday conversations often give me hints for digital transformation.

How did you feel about writing a blog post? I am surprised that it has already been a month since I joined the company! I will continue to cherish each day.

Ahsan Rasel

Self-introduction: My name is Rasel, from the Mobile App Development Group, Platform Development Division. I am from Bangladesh. I work on the Android version of the my route by KINTO app.

How is your team structured? We are a four-member multinational team, including myself, with members from Japan, Bangladesh, and South Korea. Our team uses the Agile methodology for our workflows.

What was your first impression of KINTO Technologies when you joined? Were there any surprises? At orientation, I was able to learn in detail about the company's structure and the mission and vision of all divisions. I had an orientation at my previous job, but it wasn't as detailed.
I also found it much easier to get my opinions across to the CIO and CEO.

What is the atmosphere like on site? Everyone is very kind and easy to work with. I had a lot of questions after I joined, but I'm grateful that everyone was happy to explain things in detail. If I have any difficulties in Japanese, I can switch to English for smoother communication, which I find to be a good thing.

How did you feel about writing a blog post? It feels fast. I have written on technical blogs before, but this is my first time writing a non-technical blog post, which was a surprising task considering how recently I joined the company.

Yuhei Miyazawa

Self-introduction: I am Miyazawa from the Operation System Development Group, Platform Development Division. In my previous job, I developed e-commerce sites on the vendor side (systems integrator). Currently, I am developing a system that handles back-office operations related to KINTO ONE used vehicles.

How is your team structured? Development is driven by an in-house team of 5 people and approximately 20 vendor engineers. To promote in-house development, many people with high technical expertise are employed, and you will never find the mentality of "we handle only planning and rely on vendors for technology" here.

What was your first impression of KINTO Technologies when you joined? Were there any surprises? The freedom and discretion in the way you work, and a good communication culture with respect for others!

What is the atmosphere like on site? It is not noisy, but not too quiet either; a peaceful atmosphere. The team relationships are so good that if someone suggests, "Let's have a drink at that restaurant," it actually happens.

How did you feel about writing a blog post? I was relieved that the content was a self-introduction and company introduction.

Ryomm

Self-introduction: My name is Matsusaka / Ryomm (@ioco95), from the Mobile App Development Group, Platform Development Division.
I am on the team that takes care of the iOS version of the my route by KINTO app.

How is your team structured? The iOS development team of my route consists of six people, including myself.

What was your first impression of KINTO Technologies when you joined? Were there any surprises? When I joined the company, my first impression was that it was rather conservative, but when I proposed ideas for what I wanted to do, I found support from the people around me, and I now find myself able to spend my time flexibly.

What is the atmosphere like on site? There is plenty of focused work time, allowing me to work silently and concentrate on what I have to do. It feels refreshing to be able to come and leave whenever I want thanks to the full flextime system. Some of us start working at 5:00 a.m. on days when we work from home, and I truly feel a sense of freedom.

How did you feel about writing a blog post? It's refreshing that the blog manages its articles on GitHub.

Pauline Ohlson

Self-introduction: Hello! My name is Pauline Ohlson. Starting in October, I was assigned as an Android engineer to the Mobile Development Group in the Platform Development Division.

How is your team structured? I work in the Osaka office, where I am seated together with the iOS engineers working in Osaka. Many of my Android project colleagues work in Tokyo, so I collaborate with them from Osaka.

What was your first impression of KINTO Technologies when you joined? Were there any gaps? My first impression was that KINTO's history is interesting and its ambitions for the future are inspiring, so I am very excited to join the company. I was also excited about the many company initiatives to use the latest tools and technologies. At KTC there are more occasions than I expected to get to know everyone. I was also happy to have the chance to talk directly to the CIO and the CEO.

What is the atmosphere like on site?
Everyone is very nice and works with passion while keeping a little bit of playfulness at the same time. Everyone is also very considerate of each other, which makes it easy to work effectively.

How did you feel about writing a blog post? This is my first time writing a blog post in this context; I think it is a cool and very fun idea!

Hiroki Shirahama

Self-introduction: I am Shirahama from the New Vehicle Subscription Development Group, KINTO ONE Development Division. I am in charge of the front end of KINTO ONE.

How is your team structured? We have six team members, including myself and IU, who both joined in October.

What was your first impression of KINTO Technologies when you joined? Were there any gaps? I found it incredibly refreshing to have the freedom to choose my work hours with full flextime.

What is the atmosphere like on site? Everyone quietly gets on with the tasks they are responsible for. Since there are daily work reports and weekly reviews, it is an environment where it is easy to understand what team members are working on and to consult with them about their tasks.

How did you feel about writing a blog post? I had been curious about it, but I never thought I would write one so soon.

Conclusion

Despite the short notice of the request to write for the Tech Blog, thank you all for your willingness to share your impressions from immediately after joining the company! I hope that this article captures a new side of KINTO Technologies. I look forward to many more interesting articles from you in the future! :)
Why Are There So Many NotFound Error Events in AWS CloudTrail!?

Hello. I'm Kurihara from the CCoE team at KINTO Technologies, who could not come to dislike alcohol even after (belatedly) watching 酒癖50. My teammate Tada previously introduced the CCoE activities at KINTO Technologies, and we work every day to keep our cloud environment secure. While analyzing AWS CloudTrail logs to check the health of our AWS accounts, I noticed that a large number of NotFound-type errors were occurring at regular intervals. It is a modest topic, but any AWS user should be running into the same phenomenon, and yet searching the web turned up nothing, so I wrote up my investigation as a blog post.

Conclusion

To get straight to the point: when analyzing AWS CloudTrail, you should exclude NotFound-type errors that come in via the AWS Config service-linked role. Given how AWS Config behaves, these error events are unavoidable, so filtering them out appropriately reduces analysis noise.

Investigation

Following the best practices for AWS multi-account management, KINTO Technologies runs a multi-account setup whose Landing Zone is managed with AWS Control Tower. Accordingly, we manage configuration data with AWS Config and audit logs with AWS CloudTrail. While analyzing the CloudTrail logs to check the health of our AWS accounts, I found NotFound-type error events occurring in large numbers and at regular intervals.

Here are the results of analyzing about one month of CloudTrail logs for one AWS account with Amazon Athena. This account had only been issued and given a minimal set of security settings; no workload had been built on it.

```sql
-- Analyze the top errorCodes
WITH filterd AS (
    SELECT *
    FROM cloudtrail_logs
    WHERE errorCode IS NOT NULL
)
SELECT
    errorCode,
    count(errorCode) AS eventCount,
    count(errorCode) * 100 / (SELECT count(*) FROM filterd) AS errorRate
FROM filterd
GROUP BY errorCode
```

| errorCode | eventCount | errorRate |
| --- | --- | --- |
| ResourceNotFoundException | 1,515 | 18 |
| ReplicationConfigurationNotFoundError | 1,112 | 13 |
| ObjectLockConfigurationNotFoundError | 958 | 11 |
| NoSuchWebsiteConfiguration | 954 | 11 |
| NoSuchCORSConfiguration | 952 | 11 |
| InvalidRequestException | 627 | 7 |
| Client.RequestLimitExceeded | 609 | 7 |

```sql
-- Check how often a specific errorCode occurs
SELECT
    date(from_iso8601_timestamp(eventtime)) AS "date",
    count(*) AS count
FROM cloudtrail_logs
WHERE errorcode = 'ResourceNotFoundException'
GROUP BY date(from_iso8601_timestamp(eventtime))
ORDER BY "date" ASC
LIMIT 5
```

| date | count |
| --- | --- |
| 2023-10-19 | 52 |
| 2023-10-20 | 80 |
| 2023-10-21 | 80 |
| 2023-10-22 | 80 |
| 2023-10-23 | 80 |

Picking out a few of these errorCodes and looking at the CloudTrail records (the actual CloudTrail logs are included at the end of this article), the `arn` field of `userIdentity` (the caller) was in every case `arn:aws:sts::${AWS_ACCOUNT_ID}:assumed-role/AWSServiceRoleForConfig/${SESSION_NAME}`. This is the service-linked role attached to AWS Config. At first I could not understand why the calls returned NotFound even though the target resources existed, but checking the `eventName` field revealed that these were not the APIs that retrieve the configuration of the resource itself, but the APIs that retrieve information about its subordinate resources.

| Resource | errorCode | API called (eventName) |
| --- | --- | --- |
| Lambda | ResourceNotFoundException | GetPolicy20150331v2 |
| S3 | ReplicationConfigurationNotFoundError | GetBucketReplication |
| S3 | NoSuchCORSConfiguration | GetBucketCors |

These errors do not affect workloads, but they are noise for routine monitoring and troubleshooting, so we would like to eliminate them. Doing so, however, would require non-essential workarounds, such as adding some configuration to each related resource (for example, adding a resource-based policy to a Lambda function that allows the InvokeFunction action only from its own account). In the end, our CCoE team concluded that we would simply exclude access from the AWS Config service-linked role when analyzing AWS CloudTrail. If you analyze with Amazon Athena, the query looks like this:

```sql
SELECT *
FROM cloudtrail_logs
WHERE userIdentity.arn NOT LIKE '%AWSServiceRoleForConfig%'
```

A Little Deep Dive

Let's dive a little deeper into the recording behavior of AWS Config that this investigation uncovered. Two things came to light that are not spelled out in the official documentation:

- how "supplementary" resources (my own coinage) are recorded
- how often "supplementary" resources are recorded

How supplementary resources are recorded

AWS Config records not only the configuration of a resource itself but also its related resources (relationships). These are categorized as "direct" and "indirect" relationships:

> AWS Config derives relationships for most resource types from configuration fields; these are called "direct" relationships. A direct relationship is a one-way relationship (A→B) between a resource (A) and another resource (B), and is usually obtained from the Describe API response of resource (A). Previously, for some resource types that AWS Config supported initially, it also captured relationships from the configurations of other resources, creating bidirectional (B→A) "indirect" relationships. For example, the relationship between an Amazon EC2 instance and its security group is direct, because the security group is included in the EC2 instance's Describe API response. Conversely, the relationship between a security group and an Amazon EC2 instance is indirect, because describing the security group returns no information about the associated instances. As a result, when a resource configuration change is detected, AWS Config not only creates a CI for that resource but also generates CIs for related resources, including resources with indirect relationships. For example, when AWS Config detects a change to an Amazon EC2 instance, it creates a CI for the instance and CIs for the security groups associated with it.
> -- https://docs.aws.amazon.com/ja_jp/config/latest/developerguide/faq.html#faq-1

Beyond these related resources, there are also resources that look like part of a resource's own configuration but are retrieved through separate APIs; I have (arbitrarily) named these "supplementary" resources. In the Lambda case, the function itself is a resource retrieved with GetFunction, while its resource-based policy is a separate resource retrieved with GetPolicy. Looking at the CI (Configuration Item), the resource-based policy, being a supplementary resource, is recorded in the `supplementaryConfiguration` field like this:

```json
{
  "version": "1.3",
  "accountId": "<$AWS_ACCOUNT_ID>",
  "configurationItemCaptureTime": "2023-12-15T09:52:19.238Z",
  "configurationItemStatus": "OK",
  "configurationStateId": "************",
  "configurationItemMD5Hash": "",
  "arn": "arn:aws:lambda:ap-northeast-1:<$AWS_ACCOUNT_ID>:function:check-config-behavior",
  "resourceType": "AWS::Lambda::Function",
  "resourceId": "check-config-behavior",
  "resourceName": "check-config-behavior",
  "awsRegion": "ap-northeast-1",
  "availabilityZone": "Not Applicable",
  "tags": { "Purpose": "investigate" },
  "relatedEvents": [],
  # Related resources
  "relationships": [
    {
      "resourceType": "AWS::IAM::Role",
      "resourceName": "check-config-behavior-role-nkmqq3sh",
      "relationshipName": "Is associated with "
    }
  ],
  ... (snip)
  # Supplementary resource
  "supplementaryConfiguration": {
    "Policy": "{\"Version\":\"2012-10-17\",\"Id\":\"default\",\"Statement\":[{\"Sid\":\"test-poilcy\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::<$AWS_ACCOUNT_ID>:root\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:ap-northeast-1:<$AWS_ACCOUNT_ID>:function:check-config-behavior\"}]}",
    "Tags": { "Purpose": "investigate" }
  }
}
```

How often supplementary resources are recorded

The frequency with which AWS Config records CIs follows the RecordingMode setting, but supplementary resources do not seem to be bound by it. It is possible that NotFound results are being retried, but the observed behavior was that recording was attempted roughly once every 12 or 24 hours, and there does not appear to be any regularity across the different kinds of supplementary resources. The behavior is quite a black box, but these are the results of the investigation.

Summary

That covers the identity of the mysterious NotFound error events appearing in AWS CloudTrail, and our countermeasure. We plan to investigate the details further, but we have also confirmed similar error events coming from the Macie service-linked role. Analyzing AWS CloudTrail can be tedious work, but it is also an opportunity to understand the behavior of AWS services in depth, so let's do it proactively! If you are an engineer who wants to use AWS to the fullest (or simply agree that 小出恵介 is a great actor!), the Platform Group is actively hiring!
To finish, here are the individual AWS CloudTrail error events. Thank you for reading.

Lambda: ResourceNotFoundException

```json
{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "************:LambdaDescribeHandlerSession",
    "arn": "arn:aws:sts::<$AWS_ACCOUNT_ID>:assumed-role/AWSServiceRoleForConfig/LambdaDescribeHandlerSession",
    "accountId": "<$AWS_ACCOUNT_ID>",
    "accessKeyId": "*********",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "*********",
        "arn": "arn:aws:iam::<$AWS_ACCOUNT_ID>:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig",
        "accountId": "<$AWS_ACCOUNT_ID>",
        "userName": "AWSServiceRoleForConfig"
      },
      "webIdFederationData": {},
      "attributes": {
        "creationDate": "2023-12-03T09:09:17Z",
        "mfaAuthenticated": "false"
      }
    },
    "invokedBy": "config.amazonaws.com"
  },
  "eventTime": "2023-12-03T09:09:19Z",
  "eventSource": "lambda.amazonaws.com",
  "eventName": "GetPolicy20150331v2",
  "awsRegion": "ap-northeast-1",
  "sourceIPAddress": "config.amazonaws.com",
  "userAgent": "config.amazonaws.com",
  "errorCode": "ResourceNotFoundException",
  "errorMessage": "The resource you requested does not exist.",
  "requestParameters": { "functionName": "**************" },
  "responseElements": null,
  "requestID": "******************",
  "eventID": "******************",
  "readOnly": true,
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "<$AWS_ACCOUNT_ID>",
  "eventCategory": "Management"
}
```

S3: ReplicationConfigurationNotFoundError

```json
{
  "eventVersion": "1.09",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "**********:AWSConfig-Describe",
    "arn": "arn:aws:sts::<$AWS_ACCOUNT_ID>:assumed-role/AWSServiceRoleForConfig/AWSConfig-Describe",
    "accountId": "<$AWS_ACCOUNT_ID>",
    "accessKeyId": "*************",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "*************",
        "arn": "arn:aws:iam::<$AWS_ACCOUNT_ID>:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig",
        "accountId": "<$AWS_ACCOUNT_ID>",
        "userName": "AWSServiceRoleForConfig"
      },
      "attributes": {
        "creationDate": "2023-12-03T13:09:16Z",
        "mfaAuthenticated": "false"
      }
    },
    "invokedBy": "config.amazonaws.com"
  },
  "eventTime": "2023-12-03T13:09:55Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "GetBucketReplication",
  "awsRegion": "ap-northeast-1",
  "sourceIPAddress": "config.amazonaws.com",
  "userAgent": "config.amazonaws.com",
  "errorCode": "ReplicationConfigurationNotFoundError",
  "errorMessage": "The replication configuration was not found",
  "requestParameters": {
    "replication": "",
    "bucketName": "*********",
    "Host": "*************"
  },
  "responseElements": null,
  "additionalEventData": {
    "SignatureVersion": "SigV4",
    "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
    "bytesTransferredIn": 0,
    "AuthenticationMethod": "AuthHeader",
    "x-amz-id-2": "**************",
    "bytesTransferredOut": 338
  },
  "requestID": "**********",
  "eventID": "*************",
  "readOnly": true,
  "resources": [
    {
      "accountId": "<$AWS_ACCOUNT_ID>",
      "type": "AWS::S3::Bucket",
      "ARN": "arn:aws:s3:::***********"
    }
  ],
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "<$AWS_ACCOUNT_ID>",
  "vpcEndpointId": "vpce-***********",
  "eventCategory": "Management"
}
```

S3: NoSuchCORSConfiguration

```json
{
  "eventVersion": "1.09",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "***********:AWSConfig-Describe",
    "arn": "arn:aws:sts::<$AWS_ACCOUNT_ID>:assumed-role/AWSServiceRoleForConfig/AWSConfig-Describe",
    "accountId": "<$AWS_ACCOUNT_ID>",
    "accessKeyId": "***************",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "*************",
        "arn": "arn:aws:iam::<$AWS_ACCOUNT_ID>:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig",
        "accountId": "<$AWS_ACCOUNT_ID>",
        "userName": "AWSServiceRoleForConfig"
      },
      "attributes": {
        "creationDate": "2023-12-03T13:09:16Z",
        "mfaAuthenticated": "false"
      }
    },
    "invokedBy": "config.amazonaws.com"
  },
  "eventTime": "2023-12-03T13:09:55Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "GetBucketCors",
  "awsRegion": "ap-northeast-1",
  "sourceIPAddress": "config.amazonaws.com",
  "userAgent": "config.amazonaws.com",
  "errorCode": "NoSuchCORSConfiguration",
  "errorMessage": "The CORS configuration does not exist",
  "requestParameters": {
    "bucketName": "********",
    "Host": "*************************8",
    "cors": ""
  },
  "responseElements": null,
  "additionalEventData": {
    "SignatureVersion": "SigV4",
    "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
    "bytesTransferredIn": 0,
    "AuthenticationMethod": "AuthHeader",
    "x-amz-id-2": "*********************",
    "bytesTransferredOut": 339
  },
  "requestID": "***********",
  "eventID": "*****************",
  "readOnly": true,
  "resources": [
    {
      "accountId": "<$AWS_ACCOUNT_ID>",
      "type": "AWS::S3::Bucket",
      "ARN": "arn:aws:s3:::*************"
    }
  ],
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "<$AWS_ACCOUNT_ID>",
  "vpcEndpointId": "vpce-********",
  "eventCategory": "Management"
}
```
Spring Boot 2 to 3 Upgrade: Procedure, Challenges, and Solutions

Introduction

Hello. I am Takehana from the Payment Platform Team, Common Service Development Group [^1][^2][^3][^4], Platform Development Division. This article covers our recent upgrade of Spring Boot, which we use for the payment platform's APIs and batches.

Challenges to Solve and Goals I Wanted to Achieve

We were using Spring Boot 2 and wanted to upgrade to 3 in consideration of the support period and other factors. The versions of the libraries we use were also upgraded:

| Library | Before migration (2) | After migration (3) |
| --- | --- | --- |
| Java | 17 | No change |
| MySQL | 5.7 | 8.0 |
| Spring Boot | 2.5.12 | 3.1.0 |
| Spring Boot Security | 2.5.12 | 3.1.0 |
| Spring Boot Data JPA | 2.5.12 | 3.1.0 |
| Hibernate Types | 2.21.1 | 3.5.0 |
| MyBatis Spring Boot | 2.2.0 | 3.0.2 |
| Spring Batch | 4.3 | 5.0 |
| Spring Boot Batch | 2.5.2 | 3.0.11 |
| Spring Boot Cloud AWS | 2.4.4 | 3.0.1 |

Trial and Error and Measures Taken

Method of Application

Referring to the official migration guides, we first updated the libraries and replaced deprecated APIs that had little impact on the existing code. After that, we updated to 3.1.0 and iterated on fixing, building, testing, and adjusting.

- Spring Boot 3.1 Release Notes
- Spring Boot 3.0 Migration Guide
- Spring Batch 5.0 Migration Guide

javax → jakarta

We changed packages from `javax`, which affected many files, to `jakarta`. The names after the package root did not change, so we replaced them mechanically.

Around DB Access

mysql-connector-java: We changed to `mysql-connector-j` because the artifact was migrated (see the Maven Repository).

MySQLDialect: Using `org.hibernate.dialect.MySQLDialect` lets Hibernate absorb the MySQL version differences.

Hibernate Types: The way the JSON type used in JPA entities is configured changed with the upgrade.

Changing ID generation to IDENTITY: The automatic numbering behavior changed in Spring Data JPA, and with AUTO it now requires a table named XXX_seq. Since our system used MySQL's AUTO_INCREMENT, we decided not to use JPA's numbering feature.
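As a minimal sketch of the IDENTITY approach described above (the entity name and fields are hypothetical, and Kotlin is used here for brevity), delegating numbering to MySQL's AUTO_INCREMENT looks like:

```kotlin
import jakarta.persistence.Entity
import jakarta.persistence.GeneratedValue
import jakarta.persistence.GenerationType
import jakarta.persistence.Id

// Hypothetical entity: with IDENTITY, Hibernate issues no sequence-table
// queries and simply reads back the AUTO_INCREMENT value MySQL assigned.
@Entity
class Payment(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    var id: Long? = null,

    var amount: Long = 0,
)
```

With GenerationType.AUTO, Hibernate 6 defaults to a sequence-style strategy, which appears to be why the missing XXX_seq table surfaced as an error.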
Spring Batch

Modifying the meta tables

The structure of Spring Batch's management tables changed. We altered the existing tables using the migration guide and the following script as references:

/org/springframework/batch/core/migration/5.0/migration-mysql.sql

However, just executing the ALTER TABLE statements caused an error at runtime because of the existing data, so after confirming that it would not affect future operation, we decided to restore the data to its initial state.

```
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: LONG
        at org.springframework.batch.core.repository.dao.JdbcJobExecutionDao$2.processRow(JdbcJobExecutionDao.java:468)
...
```

(The PARAMETER_TYPE column of BATCH_JOB_EXECUTION_PARAMS contained a LONG value.)

The data was restored to its initial state with the following SQL:

```sql
TRUNCATE TABLE BATCH_STEP_EXECUTION_CONTEXT;
TRUNCATE TABLE BATCH_STEP_EXECUTION_SEQ;
TRUNCATE TABLE BATCH_JOB_SEQ;
TRUNCATE TABLE BATCH_JOB_EXECUTION_SEQ;
TRUNCATE TABLE BATCH_JOB_EXECUTION_PARAMS;
TRUNCATE TABLE BATCH_JOB_EXECUTION_CONTEXT;
SET foreign_key_checks = 0;
TRUNCATE TABLE BATCH_JOB_EXECUTION;
TRUNCATE TABLE BATCH_JOB_INSTANCE;
TRUNCATE TABLE BATCH_STEP_EXECUTION;
SET foreign_key_checks = 1;
INSERT INTO BATCH_STEP_EXECUTION_SEQ VALUES(0, '0');
INSERT INTO BATCH_JOB_EXECUTION_SEQ VALUES(0, '0');
INSERT INTO BATCH_JOB_SEQ VALUES(0, '0');
```

BasicBatchConfigurer can no longer be used

We switched to using DefaultBatchConfiguration.

StepBuilderFactory and JobBuilderFactory were deprecated

The JobRepository and TransactionManager are now passed to `new StepBuilder()`.

The argument type of ItemWriter changed

The write process was fixed after the argument type changed from `List` to `org.springframework.batch.item.Chunk`.

Before correction:

```java
ItemWriter<Dto> write() {
    return items -> {
        ...
        items.stream()
            .flatMap(dto -> dto.getDatas().stream())
            .forEach(repository::update);
        ...
```

After correction:

```java
ItemWriter<Dto> write() {
    return items -> {
        ...
        items.getItems().stream()
            .flatMap(dto -> dto.getDatas().stream())
            .forEach(repository::update);
        ...
```

The behavior of @EnableBatchProcessing changed

During operation checks, processing was skipped in our chunk-model batches because the behavior of @EnableBatchProcessing changed.

Spring Cloud AWS

Library changes

This system uses many AWS services, and Spring Cloud AWS links them. With the update, `io.awspring.cloud:spring-cloud-starter-aws` was renamed to `io.awspring.cloud:spring-cloud-aws-starter` (confusingly similar), and `com.amazonaws:aws-java-sdk` was replaced with `software.amazon.awssdk`, so we fixed the code to work with them.

SES

Because AmazonSimpleEmailService can no longer be used, we switched the implementation to JavaMailSender. The JavaMailSender we use is built by the SES auto-configuration and injected via DI.

SQS

Request objects, such as those for sending to SQS, are now built with the Builder pattern, so we fixed them accordingly. In addition, @NotificationMessage, which we used with the SqsListener, is gone, so we created an SqsListenerConfigurer and prepared a MessageConverter.

```java
@Bean
public SqsListenerConfigurer configurer(ObjectMapper objectMapper) {
    return registrar -> registrar.manageMessageConverters(
        list -> list.addAll(
            0,
            List.of(
                new SQSEventModelMessageConverter(
                    objectMapper, ReciveEventModel.class),
                ...
}

@RequiredArgsConstructor
private static class SQSEventModelMessageConverter implements MessageConverter {

    private static final String SQS_EVENT_FILED_MESSAGE = "Message";

    private final ObjectMapper objectMapper;
    private final Class<?> modelClass;

    @Override
    public Object fromMessage(Message<?> message, Class<?> targetClass) {
        if (modelClass != targetClass) {
            return null;
        }
        try {
            val payload = objectMapper
                .readTree(message.getPayload().toString())
                .get(SQS_EVENT_FILED_MESSAGE)
                .asText();
            return objectMapper.readValue(payload, targetClass);
        } catch (IOException ex) {
            throw new MessageConversionException(
                message, " Could not read JSON: " + ex.getMessage(), ex);
        }
        ...
    }
```

S3

For uploads to S3, TransferManager was changed to S3TransferManager, and the implementation for issuing signed URLs needed to be fixed.

SNS

With the DefaultTopicArnResolver, the sns:CreateTopic permission was required for sending to SNS. We switched to TopicsListingTopicArnResolver, so the CreateTopic permission is no longer needed.

```java
@ConditionalOnProperty("spring.cloud.aws.sns.enabled")
@Configuration
public class SNSConfig {

    @Bean
    public TopicArnResolver topicArnResolver(SnsClient snsClient) {
        return new TopicsListingTopicArnResolver(snsClient);
    }
}
```

Around the API

WebSecurityConfigurerAdapter can no longer be referenced

We switched to an approach using SecurityFilterChain, referring to the spring-security documentation.

Stricter URL paths

Trailing-slash paths are now strictly differentiated. Since this system is linked with another internal system, we added the trailing-slash path to @RequestMapping, had the peer system adjust, and then removed the added path.

Before:

```java
@RequestMapping(
    method = RequestMethod.GET,
    value = {"/api/payments/{id}"},
    ...
```

After:

```java
@RequestMapping(
    method = RequestMethod.GET,
    value = {"/api/payments/{id}", "/api/payments/{id}/"},
    ...
```

Renaming properties (application.yml)

Properties for jpa, redis, spring-cloud-aws, and others were renamed.
We adjusted them according to the official information.

Deployment

A 404 with the ECS deployment

We reached the point where we could deploy to ECS and confirmed the launch in the application log, but when we accessed the API, we got a 404. Checking further, the health check was failing the ECS deployment. With the help of our cloud platform engineers, we discovered that the version of aws-opentelemetry-agent used for telemetry data collection was outdated. After changing to jar version 1.23.0 or later, we could deploy successfully and confirm API communication. (See: OpenTelemetry Java provided by AWS.)

Results, Additional Knowledge, and Next Attempts

Some parts of the system did not follow the common Spring Boot implementation patterns because of various requirements, and the structure sometimes did not allow us to migrate easily with just the migration guides. We managed to release after repeated trial and error. I would like to thank the team for their continued work and reviews.

We will continue to address the following remaining issues while also taking advantage of the features that were improved in Spring Boot 3.

- Swagger UI: We put off upgrading this. Since springfox is not yet compatible with Spring Boot 3, we are considering changing to springdoc-openapi.
- Spring Batch + MySQL 8.0 + DbUnit: This combination results in an error under certain conditions. It seems to be related to Spring Batch's transaction management (meta table operations), and we are looking into how to fix it.

Summary of the Lessons of This Article

- We were able to upgrade Spring Boot by repeatedly building and testing while referring to the migration guides.
- The update had a wide range of effects, but because we had tests in place, we could find out what we had to fix and address it efficiently.
- We also found problems only by running operations, such as the change to @EnableBatchProcessing, so runtime checks were necessary as well.
- Regarding Java EE, with the change to Jakarta EE, we had to update the Spring Boot library and others.
- Security is stronger (trailing-slash rules are stricter, the auth filter can be combined with ignored paths, etc.).
- The updates differed for each dependent library, and Spring Cloud AWS was especially different.
- We might have needed fewer changes if we had upgraded libraries more frequently.

Thank you for reading this article. I hope it is useful for those who are also considering an upgrade.

[^1]: Posted by a member of the Common Services Development Group: Domain-Driven Design (DDD) is incorporated into a payment platform with a view to global expansion
[^2]: Posted by a member of the Common Services Development Group: New Employees Develop a New System with Remote Mob Programming
[^3]: Posted by a member of the Common Services Development Group: Improving Deployment Traceability with JIRA and GitHub Actions
[^4]: Posted by a member of the Common Services Development Group: Building a Development Environment with VSCode Dev Container
Room Migration
Introduction

Hello, I'm Hasegawa from KINTO Technologies. I usually work as an Android engineer, developing an application called "my route by KINTO." In this article, I will talk about my experiences with database migration while developing the Android version of my route by KINTO.

Overview

Room is an official Android library that makes local data persistence easy. Storing data on the device has significant advantages from a user's perspective, including the ability to use apps offline. On the other hand, from a developer's perspective, there are a few tasks that need to be done. One of them is migration. Although Room officially supports automated database migration, updates involving complex schema changes still need to be handled manually. This article will cover everything from simple automated migration to complex manual migration, along with several use cases.

What Happens If a Migration Is Not Done Correctly?

Have you ever thought about what happens if you don't migrate data correctly? There are two main patterns, depending on how the app handles the failure:

- The app crashes
- Data is lost

You may have experienced apps crashing if you use Room. The following errors occur depending on the case:

- When the database version has been updated but the appropriate migration path has not been provided: `A migration from 1 to 2 was required but not found. Please provide the necessary Migration path`
- When the schema has been updated but the database version has not: `Room cannot verify the data integrity. Looks like you've changed schema but forgot to update the version number.`
- When a manual migration is not working properly: `Migration didn't properly handle: FooEntity().`

Basically, all of these can occur in the development environment, so I don't think they are that much of a problem.
However, note that if fallbackToDestructiveMigration(), described below, is used to cover up migration failures, the failure can be very hard to notice, and in some cases it may occur only in the production environment.

What about "data loss"? Room lets you call fallbackToDestructiveMigration() when you create the database object. This function permanently deletes the data if migration fails, so that the app can still start normally. I am not sure whether it is meant to address the errors mentioned above or to avoid the time-consuming process of writing migrations, but I have seen it used occasionally. If you do this, a migration failure silently turns into data loss, which is difficult to detect. Therefore, it is best to strive for successful migrations.

Four Migration Scenarios

Here are four examples of schema updates that may occur in the course of app development.

1. New Table Addition

Since adding a new table does not affect existing data, it can be migrated automatically. For example, if you have an entity named FooClass in DB version 1 and add an entity named BarClass in DB version 2, you can simply pass autoMigrations with AutoMigration(from = 1, to = 2) as follows.

```kotlin
@Database(
    entities = [
        FooClass::class,
        BarClass::class, // Added
    ],
    version = 2, // 1 -> 2
    autoMigrations = [
        AutoMigration(from = 1, to = 2)
    ]
)
abstract class AppDatabase : RoomDatabase()
```

2. Delete or Rename Tables, Delete or Rename Columns

Automated migration is also possible for deletions and renames, but you need to define an AutoMigrationSpec. As an example of the most common case, a column rename, suppose the name column of the entity User is renamed to firstName.

```kotlin
@Entity
data class User(
    @PrimaryKey val id: Int,
    // val name: String, // old
    val firstName: String, // new
    val age: Int,
)
```

First, define a class that implements AutoMigrationSpec. Then annotate it with @RenameColumn, giving the necessary information about the column to be changed as arguments.
Pass the created class to the corresponding version of AutoMigration, and pass that to autoMigrations.

```kotlin
@RenameColumn(
    tableName = "User",
    fromColumnName = "name",
    toColumnName = "firstName"
)
class NameToFirstnameAutoMigrationSpec : AutoMigrationSpec

@Database(
    entities = [
        User::class,
        Person::class
    ],
    version = 2,
    autoMigrations = [
        AutoMigration(from = 1, to = 2, spec = NameToFirstnameAutoMigrationSpec::class),
    ]
)
abstract class AppDatabase : RoomDatabase()
```

Room provides additional annotations, including @DeleteTable, @RenameTable, and @DeleteColumn, which make deletions and name changes easy to handle.

3. Add a Column

Personally, I think adding a column is the change most likely to occur. Let's say a height column is added to the entity User.

```kotlin
@Entity
data class User(
    @PrimaryKey val id: Int,
    val name: String,
    val age: Int,
    val height: Int, // new
)
```

Adding a column requires manual migration, because you have to tell Room the default value for height. Simply create an object that inherits from Migration as follows and pass it to addMigrations() when creating the database object. Write the required SQL in database.execSQL.

```kotlin
val MIGRATION_1_2 = object : Migration(1, 2) {
    override fun migrate(database: SupportSQLiteDatabase) {
        database.execSQL(
            "ALTER TABLE User ADD COLUMN height INTEGER NOT NULL DEFAULT 0"
        )
    }
}

val db = Room.databaseBuilder(
    context,
    AppDatabase::class.java,
    "database-name"
)
    .addMigrations(MIGRATION_1_2)
    .build()
```

4. Add a Primary Key

In my app experience, there have been cases where a primary key was added. This happens when the primary key assumed at table creation is no longer sufficient to maintain uniqueness, and other columns are added to the primary key. For example, suppose that in the User table, id was the primary key until now, but name is added to it so that they become a composite primary key.
```kotlin
// DB version 1
@Entity
data class User(
    @PrimaryKey val id: Int,
    val name: String,
    val age: Int,
)

// DB version 2
@Entity(
    primaryKeys = ["id", "name"]
)
data class User(
    val id: Int,
    val name: String,
    val age: Int,
)
```

In this case (and this is not limited to Android), the common method is to create a new table. The following SQL creates a table named UserNew with the new primary key constraint and copies the data over from the User table, then drops the existing User table and renames UserNew to User.

```kotlin
val migration_1_2 = object : Migration(1, 2) {
    override fun migrate(database: SupportSQLiteDatabase) {
        database.execSQL("CREATE TABLE IF NOT EXISTS UserNew (`id` INTEGER NOT NULL, `name` TEXT NOT NULL, `age` INTEGER NOT NULL, PRIMARY KEY(`id`, `name`))")
        database.execSQL("INSERT INTO UserNew (`id`, `name`, `age`) SELECT `id`, `name`, `age` FROM User")
        database.execSQL("DROP TABLE User")
        database.execSQL("ALTER TABLE UserNew RENAME TO User")
    }
}
```

Let's Check If the Migration Works Correctly!

There are many cases more complex than the migration examples above. In the app I work on, there have even been changes to tables involving foreign keys. In such cases, the only way is to write the SQL statements yourself, and you will want to make sure the SQL really works correctly. For this purpose, Room provides a way to test migrations. The following test code can be used to check that a migration works properly. To run the test, the schema for each database version needs to be exported beforehand; see "Export schemas" for more information. Even if you did not export the schema of an old database version, I recommend identifying the past version from git tags or similar and exporting its schema. The point is to refer to the same values from both the production code and the test code, as with the migrations defined in the list manualMigrations.
This way, even if you later add a migration5To6 to the production code, you can rest assured that the test code will automatically verify it.

```kotlin
// production code
val manualMigrations = listOf(
    migration1To2,
    migration2To3,
    // 3 -> 4 is an automated migration
    migration4To5,
)
```

```kotlin
// test code
@RunWith(AndroidJUnit4::class)
class MigrationTest {
    private val TEST_DB = "migration-test"

    @get:Rule
    val helper: MigrationTestHelper = MigrationTestHelper(
        InstrumentationRegistry.getInstrumentation(),
        AppDatabase::class.java,
    )

    @Test
    @Throws(IOException::class)
    fun migrateAll() {
        // Create the database at its oldest version, then close it.
        helper.createDatabase(TEST_DB, 1).apply {
            close()
        }

        // Open it at the latest version, applying every manual migration.
        Room.databaseBuilder(
            InstrumentationRegistry.getInstrumentation().targetContext,
            AppDatabase::class.java,
            TEST_DB
        ).addMigrations(*manualMigrations.toTypedArray()).build().apply {
            openHelper.writableDatabase.close()
        }
    }
}
```

Summary

Today I talked a bit about Room migration through a few use cases. I would like to avoid manual migrations as much as possible, and I believe the key to achieving that is getting the entire team involved in table design. Also, remember to export the schema for each database version; otherwise, future developers will have the tedious job of going back through git, exporting the schema, and verifying it. Thank you for reading.

Reference

https://developer.android.com/training/data-storage/room/migrating-db-versions?hl=ja
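As an aside, since Room runs on SQLite, the create-copy-drop-rename sequence from scenario 4 can be sanity-checked outside Android with any SQLite binding. Here is a minimal standalone Python sketch; the table rows are invented for illustration:

```python
import sqlite3

# Version 1 schema: id alone is the primary key (illustrative data).
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE User (`id` INTEGER NOT NULL, `name` TEXT NOT NULL,"
    " `age` INTEGER NOT NULL, PRIMARY KEY(`id`))"
)
db.execute("INSERT INTO User VALUES (1, 'Alice', 20), (2, 'Bob', 30)")

# The same four statements as the scenario 4 migration.
db.execute(
    "CREATE TABLE IF NOT EXISTS UserNew (`id` INTEGER NOT NULL,"
    " `name` TEXT NOT NULL, `age` INTEGER NOT NULL, PRIMARY KEY(`id`, `name`))"
)
db.execute("INSERT INTO UserNew (`id`, `name`, `age`) SELECT `id`, `name`, `age` FROM User")
db.execute("DROP TABLE User")
db.execute("ALTER TABLE UserNew RENAME TO User")

# Existing rows survive, and the composite key now allows a duplicate id
# as long as the (id, name) pair stays unique.
rows = db.execute("SELECT id, name, age FROM User ORDER BY id").fetchall()
db.execute("INSERT INTO User VALUES (1, 'Carol', 40)")  # same id, different name
```

Running the statements through a Room Migration and MigrationTestHelper remains the authoritative check; this sketch only confirms that the SQL itself behaves as intended.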
Hello. I am Ohsugi from the Woven Payment Solution Development Group. My team is developing the payment system used by Woven by Toyota for Toyota's Woven City. We typically use Kotlin/Ktor for backend development and Flutter for the frontend. In a previous article, I discussed how we selected the frontend technology before developing our web application. Since then, we have expanded our operations and are currently working on seven Flutter applications across web and mobile platforms. In this article, I will talk about how our team, which originally had only backend engineers, came up with ways to efficiently develop multiple Flutter applications in parallel with backend development.

Flutter and the related logo are trademarks of Google LLC. We are not endorsed by or affiliated with Google LLC.

Overview

As mentioned, our team does both backend and frontend development for a payment system. It is a payment application for people at Woven City, accessible through a web-based management interface or as a Proof of Concept (PoC) on mobile devices. To develop the Flutter apps efficiently in parallel with the backend development of the payment system, we decided to take the following steps:

- Design a Common Application Architecture
- Form a Design Policy for Lazy UI Components
- Define the Tech Stack Unification and Development Flow

Design a Common Application Architecture

Various backend and frontend architectures have been proposed over the years, but I think it is best to pick one that suits the development team and the product phase, and improve it along the way. We adopted a clean architecture for backend development, and for the Flutter applications we applied an architecture that uses only MVVM and the repository pattern, with a similar layer-first directory structure. Specifically, the directory structure is as follows.
Directory Configuration

```
lib/
├── presentations
│   ├── pages
│   │   └── home_page
│   │       ├── home_page.dart
│   │       ├── home_page_vm.dart
│   │       ├── home_page_state.dart
│   │       └── components
│   ├── components
│   ├── style.dart // a common style definition
│   └── app.dart
├── domains
│   ├── entities
│   └── repositories // repository interface
├── infrastructures
│   └── repositories
└── main.dart
```

Directory Roles

Three main directories make up the layer structure. Here is a brief description of each directory's role.

| Directory | Layer | Role |
| --- | --- | --- |
| presentations | Presentation layer | Defines the View, the ViewModel, and, if necessary, the states |
| domains | Domain layer | Defines the domain models, logic, and repository interfaces |
| infrastructures | Infrastructure layer | Defines the repository implementations, including those for API calls |

When designing with a layer pattern, you may want a use case layer, but there is currently very little business logic in the frontend, so we have included it in the ViewModel. The application we are developing does not have complex functions yet, and basically one page corresponds to one domain, so this design has worked smoothly for us. When creating a new app for a PoC, we start from this template so that there are no architectural differences between applications.

Form a Design Policy for Lazy UI Components

When designing UI components, we decided not to adopt Atomic Design and not to make too many common components.
There are some drawbacks, but we did it this way for the following reasons:

- It was difficult for all members to share the same sense of the levels in the Atomic Design classification
- We wanted to focus on page implementation rather than building common components
- Most importantly, it takes a lot of energy to build an abstract widget in Flutter

I think making common components is the more usual approach, but we are currently in a phase where the application changes alongside flexibly changing specifications, and we decided it would be more beneficial not to standardize much in the short term.

Define the Tech Stack Unification and Development Flow

Many different state management and navigation frameworks have come and gone. Beginners get confused about which library to use because there is so much information out there; I have experienced this myself and can relate. So we decided to use the following tech stack across all applications.

| Target | Library |
| --- | --- |
| State management and provider creation | riverpod |
| Model definition | freezed |
| Screen transitions | go_router |
| API client | dio, openapi-generator |
| Project management | melos |

:::message
We are doing schema-driven development with OpenAPI, automatically generating the frontend API clients with openapi-generator from the OpenAPI schema YAML file created during backend development.
:::

We use Riverpod for state management and provider creation. The concept of providers in Riverpod is unfamiliar territory for backend engineers, and since a provider can be implemented in any number of ways by hand, we defined the implementation flow and where providers are used somewhat strictly.
Make sure to use riverpod_generator to generate providers. A provider is used to bind the infrastructure-layer repository to the domain-layer interface:

```dart
@riverpod
Future<HogeRepository> hogeRepository(HogeRepositoryRef ref) async {
  final apiClient = await ref.watch(openApiClientProvider.future);
  return HogeRepositoryImpl(
    apiClient: apiClient.getHogeApi(),
  );
}
```

The ViewModel is implemented with an AsyncNotifierProvider, and the repository providers required by the View are aggregated into the ViewModel:

```dart
@riverpod
class HogePageViewModel extends _$HogePageViewModel {
  @override
  Future<List<Hoge>> build() {
    return _fetchData();
  }

  Future<List<Hoge>> _fetchData() async {
    final repository = await ref.watch(hogeRepositoryProvider.future);
    return repository.getList();
  }

  Future<void> registerData(Hoge hoge) async {
    final repository = await ref.read(hogeRepositoryProvider.future);
    return repository.register(hoge);
  }
}
```

The View watches the AsyncValue from the ViewModel and renders the UI; CRUD operations go to the repository via the ViewModel.

As described above, we defined the process from repository implementation to wiring up the UI and backend. When creating sprint tasks, tickets are divided at a granularity that matches these process steps.

Conclusion

As the priority of client application development rose in the project, we established a frontend development policy and came up with ways to develop smoothly as a team. Since many web management screens share a basic set of a list screen, a details screen, and an editing screen, we are also thinking about implementing UI more efficiently using code generators in the future.
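On the closing point about code generators: as a rough, hypothetical illustration (none of these templates, paths, or names come from the actual codebase), a generator for the recurring list/details/edit page set could start as little more than string templates keyed by a model name:

```python
# Toy page-set generator: given a model name, emit Dart skeletons for the
# list / details / edit pages following a lib/presentations-style layout.
# The paths, class names, and template are hypothetical examples.
PAGE_KINDS = ["list", "details", "edit"]

PAGE_TEMPLATE = """\
// {path}
class {model}{kind_cap}Page {{}}
class {model}{kind_cap}PageViewModel {{}}
"""

def to_snake(name: str) -> str:
    """PaymentHistory -> payment_history (good enough for a sketch)."""
    return "".join(c if c.islower() else "_" + c.lower() for c in name).lstrip("_")

def generate_page_set(model: str) -> dict[str, str]:
    """Return {relative_path: file_contents} for one model's page set."""
    snake = to_snake(model)
    files = {}
    for kind in PAGE_KINDS:
        path = f"lib/presentations/pages/{snake}_{kind}_page/{snake}_{kind}_page.dart"
        files[path] = PAGE_TEMPLATE.format(path=path, model=model, kind_cap=kind.capitalize())
    return files

files = generate_page_set("PaymentHistory")
```

A real generator would of course emit the full ViewModel/state boilerplate rather than empty classes, but the shape of the tool (model name in, a page-set of files out) stays the same.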
Introduction

Hello! I'm Wada (@cognac_n), a data scientist at KTC. In January 2024, the 生成AI活用PJT (Generative AI Utilization Project) was launched at KTC, and I was assigned to it as a member. In this article, I would like to introduce the project.

What Is Generative AI?

As the name suggests, it is AI that generates new data. It shot to prominence when OpenAI released ChatGPT in November 2022. AI has gone through several temporary booms (*1) before, but the "fourth AI boom" (*2) driven by advances in generative AI has gone beyond a mere boom: generative AI is taking root in everyday life and work. I believe its use, which will only keep expanding, has enough impact to overturn much of the common sense of how we live and work.

What We Have Done So Far

The project was launched in January 2024, but we had been working with generative AI well before that. Here is a small sample of those efforts:

- In-house development of an AI chatbot as an internal Slack bot
- Hosting an external hands-on event themed on generative AI
- Promoting generative AI tools internally
- DX of customer center operations using generative AI
- Planning and developing new services using generative AI

In reality, though, there were also many initiatives we reluctantly had to shelve for lack of capacity. Now that the project has been formally established as an organization, I think we can promote generative AI even more broadly. I'm looking forward to it!

What the Project Aims For

Our stance: what we value is contributing, through technology, to the company's business activities. Our goal is to be a problem-solving organization that resolves internal issues with overwhelming speed, quality, and volume. Rather than being critics who try things and stop there, we will keep operating as an organization committed to delivering value!

The impact we want to have on the company: we aim to become a company where every single employee uses generative AI as a matter of course! ...That said, what would that state look like? For example:

- People can recognize that "this task is well suited to generative AI and can be delegated to it"
- People can write basic prompts appropriate to the task
- A culture exists that can accept output produced by generative AI

In the rapidly changing world of generative AI, I think we need to keep asking what state we should be aiming for.

To get there, the project currently divides generative AI initiatives into three levels:

- Level 1: First, "just try it" with existing systems
- Level 2: Create further value with minimal development
- Level 3: Maximize the value delivered to the business

[Figure: level classification of generative AI initiatives; we estimate the value of each initiative and aim for the appropriate level.] This does not mean every initiative should aim for Level 3. If Level 1 already creates sufficient value, there may be no need to spend the money and effort to move to Level 2. What matters is trying lots of ideas at Level 1. Ideally, every employee, non-engineers included, would have enough AI literacy to carry out Level 1 on their own.

What We Want to Work On Next

From supported "just try it": it has been several months since generative AI tools became available in-house, but we still hear "I don't know what it can do" and "I don't know when to use it." To start, those of us with generative AI expertise will carefully support people on which tasks suit generative AI and what prompts to write, while growing the number of internal use cases.

- At first, "just try it" with careful support
- Increase examples of value created in-house with generative AI
- Make internal use of generative AI "a matter of course"

Toward autonomous "just try it": if we only ever solve problems hands-on, our own capacity becomes the bottleneck and it will not scale. We want the people in charge of each business task to notice for themselves that "this task suits generative AI" and to be able to "just try" Level 1 usage with basic prompts.

- Enable staff to carry out Level 1 usage on their own
- We handle consultations on improving Level 1 usage or moving on to Level 2

Training to make this possible: we will expand our in-house training to raise employees' AI literacy across the board.
We aim to foster a culture in which many employees share a common understanding of generative AI, conversations about using it flow smoothly, and output produced by generative AI is accepted.

- Expand in-house IT literacy training
- Tailor training to job types and skill levels
- Run sessions at a fine granularity: image generation, summarization, translation, and so on
- Use participant feedback to deliver the training that is genuinely needed, on short lead times

Information Sharing

We will share our activities through various channels, starting with this tech blog. We are planning a range of content, from technical reviews of generative AI to introductions of the project's work. Stay tuned!

Closing

Thank you for reading this far! Much of this article was abstract, but I hope it serves as a reference for others who, like us, aim to put generative AI to use.

References

[*1] Ministry of Internal Affairs and Communications. "人工知能(AI)研究の歴史" (History of Artificial Intelligence (AI) Research). (Accessed 2024-01-16)
[*2] Nomura Research Institute. "生成AIで変わる未来の風景" (The Future Landscape Transformed by Generative AI). (Accessed 2024-01-16)
Introduction

Hello. I'm Daichi, a front-end engineer in the Global Development Division at KINTO Technologies (KTC). I am currently developing the KINTO FACTORY e-commerce site. KINTO FACTORY is a vehicle upgrade service for owners of Toyota and Lexus cars. Through its three services (Reform, Upgrade, and Personalize), owners can bring the latest hardware and software into their vehicles.

For a rapidly growing e-commerce site, SEO and page load time are important, because we want to reach more users and provide a better user experience. In this article, I would like to walk through how we improved KINTO FACTORY's SEO and page load time by optimizing its Core Web Vitals scores.

What Are Core Web Vitals?

Core Web Vitals are a set of metrics developed by Google that measure real-world user experience in terms of page loading performance, interactivity, and visual stability. In May 2021, Google announced Core Web Vitals as ranking factors that affect SEO. As of 2023, there are three main Core Web Vitals metrics:

- Largest Contentful Paint (LCP): measures how long it takes to load the largest image or block of text visible on the device (PC or smartphone screen).
- First Input Delay (FID): measures how long the browser takes to respond when the user interacts with the page (button click, tap, keyboard input, etc.). A similar metric, Interaction to Next Paint (INP), covers responsiveness after the initial load; Google has announced that INP will replace FID in March 2024.
- Cumulative Layout Shift (CLS): measures visual stability while the web page loads.

Before Optimization

There are many tools for measuring a website, but I recommend Google PageSpeed Insights. It gives you a detailed report on the areas to improve (including how to fix the problem spots), and shows your page's real-world performance based on Chrome browser data. Before optimization, KINTO FACTORY's scores on mobile and desktop were as follows. Comparing the results below against the Core Web Vitals thresholds in the figure above, the parts in red are clearly unfavorable. [Before optimization: mobile / desktop scores]

Analyzing the report, the main factor behind the slow page loads was images:

- The assets loaded on the landing page (especially images) were too heavy: about 13 MB on mobile and about 14 MB on desktop.
- Individual images were too large (most over 300 KB).
- Image sizes did not match the screen size; the same images were used for both mobile and desktop.
- The Largest Contentful Paint image was lazy-loaded.
- Image elements without width and height attributes were causing layout shifts across the site.
- Because of how the markup and CSS were written, both the mobile and the desktop image were downloaded on every page load.

[Mobile asset size (before) / Desktop asset size (before)]

After Optimization

Having measured the site's performance and identified multiple issues slowing the pages down, we got to work. For KINTO FACTORY, we did the following:

- Reviewed all images and optimized them for each screen size, using the appropriate format for each image, including WebP, starting with the above-the-fold Largest Contentful Paint image.
- Lazy-loaded images that do not appear above the fold, so they are fetched only when needed (when they come into view), and conversely stopped lazy-loading any image that is above the fold.
- Set width and height on images to prevent layout shifts (especially the above-the-fold Largest Contentful Paint image).
- Used a rel=preconnect resource hint to connect early and speed up font loading (Google Fonts).
- Stopped the markup pattern in which both the mobile and desktop image elements were rendered and merely shown or hidden with CSS, which downloaded an unnecessary image on every page load. The fix looks like this:

```html
<!-- Before -->
<img src="pc-image.png" class="show-on-desktop-size" />
<img src="sp-image.png" class="show-on-mobile-size" />

<!-- After -->
<picture>
  <source media="(min-width: 600px)" srcset="pc-image.png" />
  <img src="sp-image.png" alt="🙂" />
</picture>
```

As a result of these optimizations, we were able to:

- Reduce asset size by more than 60%
- Improve page load time
- Reduce Cumulative Layout Shift (CLS) to nearly zero

[After optimization: mobile / desktop scores; mobile and desktop asset sizes (after)]

Conclusion

Core Web Vitals are a great way to measure a website's overall performance. As each report shows, simply optimizing assets (images and fonts) improves the user experience, earns a higher position in search results, and boosts SEO. As a first step for KINTO FACTORY we optimized the top page, and I think it was a big one. That said, we have not yet reached an optimal score, so we will keep working on it to deliver the best experience to every user.
Introduction

Hi! Thank you for your interest in my article! I am Yutaro Mikami, an engineer in the Project Development Division at KINTO Technologies. I joined the company in September this year and usually work as a front-end engineer on the development of KINTO FACTORY. In this article, I will write about my experience and efforts since joining KINTO Technologies, focusing on the theme of Agile.

Topic

As the title indicates, I will talk about the initiatives we undertook to achieve accurate progress management in our team's Agile development, whose burndown chart showed the actual work line consistently above the ideal work line. (Good!👍)

Main Body

What Progress Management Should Be

Burndown charts provide a quick overview of the decreasing remaining workload, offering the following benefits:

- Reporting progress to stakeholders
- Maintaining visual motivation for developers
- Early detection of task stoppers
- Promoting team cooperation and collaboration

Definition of Current Issues

With the above in mind, I will use my team's burndown chart for one sprint to summarize the issues I identified. [Burndown chart before Kaizen (improvement)] As the work progresses, the graph naturally trends down, but it also rises irregularly, and the gap from the ideal line at the end of the sprint is noticeable. My conclusions:

- Reporting progress to stakeholders: the report becomes unreliable, because the team cannot tell whether a rise in the graph is intended or not.
- Maintaining visual motivation for developers: there is no downward trend in the graph, making it difficult to stay motivated for lack of visible success.
- Early detection of task stoppers: since we report progress daily, the team knows when task progress is falling behind. However, spotting task stoppers from the graph is challenging, making it hard to see that progress is stagnating.
- Promoting team cooperation and collaboration: there is adequate communication day to day, but few cases of cooperation and collaboration through the charts.

Kaizen Goals

A goal is not the end of the process, but for the sake of clarity I used the word "goals" to describe what the team should aim for. The goals defined in this article are as follows:

- Be able to understand and control the progress of tasks through the charts
- Be able to recognize the reasons when the graph rises, while allowing it to rise as a team
- Each developer should feel a sense of accomplishment through the charts
- Be able to promote cooperation and collaboration throughout the team

After Starting Kaizen

We are still in the middle of Kaizen, but the chart is currently trending in the right direction. [Latest burndown chart]

What We Did, Step 1: "Cultivating Awareness" Kaizen

Stop the approach of simply stacking tasks into the sprint. This was the only concrete action, but I think it was very effective. With this awareness shared across the whole team, we successfully reduced the end-of-sprint gap from the ideal line. In addition, the accuracy of our velocity has improved, which we expect to further improve the precision of our estimates.

What We Did, Step 2: "Planning" Kaizen

Set up a "place to store tickets for the next sprint." The additional tickets stacked during a sprint fall into two main categories:

- Tickets forgotten during planning
- Tickets added during the sprint

Tickets added during the sprint have several causes and were difficult to improve in the short term, so we first took action to stop tickets being forgotten. We created a "place for tickets to be stacked in the next sprint" in the backlog and started discussing the tasks placed there (the red frame in the image) during planning. This produced the following effects.
- Prevention of forgetting to stack: moving these tasks to the top of the backlog before planning eliminates forgetting to stack them.
- Improved task comprehension: spending breakdown time on each ticket has improved each member's understanding of the tasks.
- Stacking the appropriate number of tasks into the sprint: linked to the "Awareness" Kaizen, this gives us the opportunity to select the tasks to be stacked, rather than stacking them indiscriminately, so an appropriate number of tasks goes into each sprint.

What We Did, Step 3: "Kaizen Meetings"

In addition to the retrospective, we set aside time in our daily Scrum activities for members to discuss issues and improvements. Discussing short- to long-term issues and deciding on next actions raised the team's awareness.

Results

- Reporting progress to stakeholders. Before: the report was unreliable because the team could not tell whether a rise in the graph was intended. Now: improvements in graph accuracy and reliability have made it possible to report progress accurately.
- Maintaining visual motivation for developers. Before: no downward trend in the graph and little sense of success. Now: there is a clear downward trend and we have had successes. Handling tickets added mid-sprint remains a future challenge.
- Early detection of task stoppers. Before: we reported progress daily, so the team knew when tasks were falling behind, but stoppers were hard to spot from the graph. Now: we continue to report progress daily, and my impression is that stoppers are becoming visible from the graph.
- Promoting team cooperation and collaboration. Before: adequate daily communication, but few cases of cooperation and collaboration through the charts.
There have been no cases of cooperation and collaboration through the charts yet, but I feel we have built a system that allows them, because we can now see who is working on which tasks much better than before.

Conclusion

That concludes my article on conducting Scrum Kaizen by analyzing our burndown charts. Thank you for reading to the end. I have realized once again that iterating on Kaizen is necessary not only for products but also for teams and processes if we want to keep doing Scrum as a team. Objective, data-based improvements using reports and charts make it easy to identify issues, and visible improvements help maintain motivation. I hope this helps your team's Kaizen as well! Lastly, KINTO FACTORY, where I belong, is looking for people to join us! If you are interested, feel free to check out the job openings below.
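To make the chart mechanics above concrete, here is a minimal standalone sketch of the arithmetic behind a burndown chart's two lines; the story-point numbers are invented for illustration, not our team's data:

```python
# Burndown arithmetic: an ideal line that burns down evenly to zero, the
# actual remaining story points per day, and a helper that flags days where
# remaining work rose (e.g. tickets added mid-sprint). Numbers are invented.
def ideal_line(total_points: int, sprint_days: int) -> list[float]:
    """Remaining points at the end of each day under an even burn-down."""
    return [total_points * (sprint_days - day) / sprint_days
            for day in range(sprint_days + 1)]

def rising_days(actual: list[float]) -> list[int]:
    """Day indices where remaining work went up instead of down."""
    return [i for i in range(1, len(actual)) if actual[i] > actual[i - 1]]

ideal = ideal_line(total_points=40, sprint_days=5)  # 40, 32, 24, 16, 8, 0
actual = [40, 35, 38, 30, 22, 12]                   # day 2: tickets were added
```

Flagging such days is exactly the "recognize the reasons when the graph rises" goal: a rise is acceptable as long as the team can explain it.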
Introduction

Hello. I'm Chris, doing front-end development in the Global Development Division at KINTO Technologies. Today I'd like to share a small snag I hit during front-end development and how I solved it!

The Snag

You have probably wanted to scroll to a specific part of a page using an anchor (a) tag, like this: give the scroll target element an id, and set href="#{id}" on the a tag.

```html
<a href="#section-1">Section 1</a>
<a href="#section-2">Section 2</a>
<a href="#section-3">Section 3</a>

<section class="section" id="section-1">
  Section 1
</section>
<section class="section" id="section-2">
  Section 2
</section>
<section class="section" id="section-3">
  Section 3
</section>
```

For long pages such as articles or terms of service, this is helpful for users. In reality, however, there is often an element fixed to the top of the page, such as a header, and after clicking the anchor link the scroll position ends up slightly off. For example, suppose we have a header like this:

```html
<style>
  header {
    position: fixed;
    top: 0;
    width: 100%;
    height: 80px;
    background-color: #989898;
    opacity: 0.8;
  }
</style>

<header>
  <a href="#section-1">......</a>
  <a href="#section-2">......</a>
  <a href="#section-3">......</a>
  ...
</header>
```

I deliberately made the header slightly transparent so you can see that, after clicking an anchor link and jumping, part of the content is hidden behind the header.

A Solution Using Only HTML and CSS

You could solve this with JavaScript, by getting the header height on click and subtracting it from the scroll position, but today I'd like to show a solution using only HTML and CSS. The idea is to place a separate `div` slightly above the `section` we actually want to reach, and let the browser scroll to that element instead.

Returning to the earlier example, first create one div inside each section. Give that div a class, say anchor-offset, and move the id that was on the `section` tag onto the newly created div.

```html
<section>
  <div class="anchor-offset" id="section-1"></div>
  <h1>Section 1</h1>
  ...
</section>
```

Then define the styles for the `section` tag and .anchor-offset in CSS:

```css
/* Use a class instead if you only want this on sections that need an anchor */
section {
  position: relative;
}

.anchor-offset {
  position: absolute;
  height: 80px;
  top: -80px;
  visibility: hidden;
}
```

With this setup, clicking an anchor link scrolls not to the section's actual position but to a point slightly above it (80px in this example), which cancels out the header height (80px).

How to Write This in Vue

Vue lets you bind values into CSS. Using this feature to set the height dynamically and wrapping it all in a component makes it even easier to maintain.

```vue
<template>
  <div :id="props.target" class="anchor-offset"></div>
</template>

<script setup>
import { computed } from 'vue'

const props = defineProps({
  target: String,
  offset: Number,
})

const height = computed(() => {
  return `${props.offset}px`
})

const top = computed(() => {
  return `-${props.offset}px`
})
</script>

<style scoped lang="scss">
.anchor-offset {
  position: absolute;
  height: v-bind('height');
  top: v-bind('top');
  visibility: hidden;
}
</style>
```

Summary

That was how to adjust the scroll position to account for fixed elements such as headers when scrolling to a specific part of a page with an a tag. There are various other ways to do this, but I hope this one serves as a useful reference!
👋Introduction

Hello! I am Sasaki, a Project Manager in the Project Promotion Group at KINTO Technologies. In my career so far, I have worked as a programmer, served as a Project Lead, trained members, and handled tasks akin to a Project Manager's (defining requirements, managing stakeholders, and so on). In my previous job, I practiced Agile with the whole team for about three years and went through a real Kaizen (improvement) journey. As I am passionate about this topic, I really wanted to write an article about Agile development today!

🚗Toyota and Agile

How is your team incorporating the Agile development methodology? Agile development takes various forms, such as Scrum for new services and Kanban for operation and maintenance. When learning Agile development, many of you may have encountered Lean development and the Toyota Production System, which is said to be the origin of Agile development[^1]. In this article, I will visualize how teams at KINTO Technologies, a Toyota group company, approach Agile. I also hope this visualization gives new insights to those practicing Agile inside their own companies.

[^1]: Agile books citing Toyota: The Agile Samurai, Lean from the Trenches, Kanban in Action, and more

:::message
### This article is useful for
- those who want to understand their team's Agile state
- those who are a bit stuck in a rut when it comes to how to proceed with Agile
- those who are facing challenges reconciling Agile ideals with their realities
- those who want to know about KINTO Technologies' approach to Agile
:::

Method

1. Quantitative visualization of each team's level of Scrum with the Scrum Checklist
2. Discussion while reviewing the results of step 1
3. Casual sharing of each team's future plans

First, we use the Scrum Checklist to visualize how much of each Scrum indicator a team has accomplished so far.
![Sample: Results of Scrum Checklist](/assets/blog/authors/K.Sasaki/image-20231120-002531.png =400x)

Once the results are visualized, the discussions begin. We use the 4L reflection framework for discussion. https://www.ryuzee.com/contents/blog/14561

:::details Notes on the use of the Scrum Checklist
The Scrum Checklist comes with a note: do not use it to compare against other teams for evaluation. It is not meant for competing with other teams. Instead, we use it as an opportunity for discussion, placing different Agile teams in a similar context. If you use it in a way similar to this article, please avoid using it to judge or evaluate people or teams, and use it among members in a constructive and mature manner.
:::

🎉Participating Members

We asked for cooperation from Scrum Masters (or people in similar positions) who run Scrum or Agile-like teams in our organization, and 10 teams (10 people, each from a different team) came! Thank you all for your time and cooperation!

How We Did It

✅ Scrum Checklist

We made various charts. The results varied widely depending on each team's situation: some said, "Although what we do is close to Waterfall, I am running Scrum events," while others said, "I felt we had some issues, but the score came out higher than expected." Some teams had low-scoring indicators but no major problems right now, such as "We do not have a Scrum Master, but we rotate Scrum events among the developers," or "We do not have a product backlog, but we have a good relationship with the owner." Since the participants each had different areas of expertise, we were able to encourage mutual learning by having them teach each other about the indicators some were less familiar with. Many teams in Group A had well-organized backlogs, while many teams in Group B were struggling with theirs. Maybe they can exchange knowledge on organizing backlogs...👀

📒Reflection (4L)

We split into two groups for the reflection.
In my previous career, I tried hard to get people to speak up, but at KINTO Technologies the board filled up in 5 to 8 minutes, giving the impression that members actively share their opinions. The red sticky notes are impressions added after seeing other people's notes. Group A Results / Group B Results. This time, we used Whiteboard, a new addition to Confluence recommended by Kin-chan. Sticky notes can be converted directly into Jira tickets, which helps organize action items.

🚩Results of the Reflection

Here is some of the feedback among the many voices. Many people expressed a desire to strengthen relationships with Product Owners (POs) so they can improve their services faster. I got the impression that many teams were highly self-organized.

Liked
- Visualization helped us understand the team's strengths and weaknesses
- We were able to understand the areas where we diverged from the ideal Scrum
- Developers are able to work responsibly and autonomously (self-organized)

Lacked
- Product Owners are not present or not included in many Scrum events
- Story Point (SP) setting and estimation are not done well
- Due to the increase in team members, some feel the need to split up the team

Learned
- I was able to learn about different Agile initiatives and products in our company
- Sprint periods can be set shorter or longer depending on each team's situation

Longed for (excerpt)

Although these are not action items, participants set themselves informal goals to keep growing:
- To revise the length of their Sprints
- To split teams into smaller ones
- To improve communication with POs

💭Thoughts

I was really surprised that people from different departments and offices, some of whom I had never met before, took part when I called for participants, even though it was only my second month at the company. I would like to thank everyone again for their cooperation.
By bringing together the people who practice Agile in the company, I made the following discoveries and learned the following lessons as a facilitator.

- Scrum checklists can be used to quantitatively visualize a team's level of Scrum
- Listening to other teams at different stages of their Scrum journey can provide opportunities for improvement and give courage for our own activities
- Connecting Scrum Masters from different teams created opportunities to find like-minded people to ask for advice on various issues
- We were able to find issues that were common across teams (such as the need to improve communication with POs and to split teams)

I did not participate very actively myself, as I was focused on facilitation, but when I heard a participant say, "I was on the brink of giving up, but learning about everyone's activities encouraged me," I was almost moved to tears. When I faced Agile challenges earlier in my career, I found solutions and empathy by attending external study groups and reading relevant books. It is always great to be able to share these challenges within the company and have someone to discuss them with.

🏔Summary: Which Agile Milestone Are We at Now?

At KINTO Technologies, our development approach adapts to the nature of the project. For large-scale projects, Waterfall is more common, and we use Agile for other project types. This time, we tried to visualize the level of Agile within the company from the perspective of Scrum, and found that each team approaches Agile, and its issues, in its own way. So... which Agile milestone are we at now? To this question, we found no clear answer! (Sorry!) However, I feel that by gathering with other Scrum Masters, we went further down the Agile path together!

✨ What I Want to Do in the Future

I am in a cross-functional team called the Project Promotion Group.
I know this is a bit presumptuous since I just joined the company, but I hope to use this as an opportunity to help promote cross-team development through initiatives such as Scrum of Scrums and a "reflection of reflections" (meetings where team improvements are shared with other Scrum Masters). The Agile Samurai ends with the words, "It doesn't matter if it is Agile or not!" I would like to keep doing as much kaizen as I can and continue climbing Mount Agile together with all of you. Be Agile! Thank you for reading this article.
Introduction

Hello! I am Uemura from KINTO Technologies' Development Support Division. As a corporate engineer, I am mainly responsible for mobile device management (MDM). We recently held a case study presentation & roundtable study session specializing in the field of corporate IT, under the title "KINTO Technologies MeetUp! - 4 cases to share for information systems by information systems." In this article, I will introduce the case study "Advancement of Windows Kitting Automation: Introducing Windows Autopilot," which was presented at the study session, along with supplementary information.

What is Windows Autopilot?

Windows Autopilot is a way to register Windows devices in Intune. By pre-registering the hardware hash (HW hash) of a device, the device is automatically registered in Intune during setup. I would like to talk about how we introduced Windows Autopilot into KINTO Technologies' environment to make PC kitting more efficient.

How Windows Autopilot automates kitting

First, I will explain how kitting automation with Windows Autopilot works.

1. Before kitting, the vendor or administrator registers the HW hash of the PC in Intune in advance.
2. The user (or administrator) starts the PC and signs in. Because the HW hash is pre-registered, the PC is automatically registered in Intune.
3. By creating a dynamic group that includes PCs registered via Windows Autopilot, any PC registered in Intune is automatically enrolled in that dynamic group.
4. By assigning the dynamic group from step 3 to each configuration profile and app deployment setting, device control and app deployment are performed automatically.

Windows Autopilot itself is responsible for the device registration function, so it covers steps 1 and 2. Dynamic groups are used so that profile control and app deployment for registered devices happen automatically in steps 3 and 4.
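For reference, the HW hash registration in steps 1 and 2 is typically done with Microsoft's published Get-WindowsAutoPilotInfo script. A minimal sketch, run from an elevated PowerShell session (the file name is illustrative; check Microsoft Learn for current parameters), looks like this:

```powershell
# Install Microsoft's published script from the PowerShell Gallery.
Install-Script -Name Get-WindowsAutoPilotInfo -Force
# Export this device's hardware hash to a CSV that can be imported
# into Intune (the output file name is illustrative).
Get-WindowsAutoPilotInfo -OutputFile AutopilotHWID.csv
```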
In other words, kitting can be automated by configuring not only the Windows Autopilot registration settings but also the device control settings and app deployment to run automatically.

Introducing Windows Autopilot

It took us about a month and a half to introduce Windows Autopilot, including research and verification. In KINTO Technologies' environment, HW hash registration had already been completed for all PCs, so only the following two things were needed this time.

- Assigning the Autopilot profile, which is the first thing executed during kitting, to the HW hashes that had already been registered
- Replacing the static groups that had been used for kitting with dynamic groups

Autopilot Profile Configuration

Assigning the Autopilot profile to a HW hash determines that the corresponding PC is registered in Intune via the Windows Autopilot method. Within the profile, you can set whether to skip choices such as "Language Setting" and "Windows License Agreement" that are normally made on the PC setup screen.

Dynamic Group Configuration

Since Autopilot-registered devices have an Autopilot device attribute, use this attribute in a dynamic membership rule. *For details, please refer to the following Microsoft site: Create a device group for Windows Autopilot | Microsoft Learn

Then specify the dynamic group you created in the assignments of the configuration profiles and app deployment settings. This makes it possible to automate the process from device registration to device control, which completes the kitting automation with Windows Autopilot.

Results of Introduction

How effective has the introduction of Autopilot been? As a quantitative result, we were able to reduce the number of work items by about 40% compared to before the introduction. On the other hand, we did not see as large a reduction in working hours, because installing apps and running Windows Update still take significant time.
As a qualitative result, automating and simplifying the kitting process has made human errors, such as work omissions, less likely.

Conclusion

Ideally, I would like to achieve so-called zero-touch kitting; introducing Autopilot has not fully achieved that, as some manual work is still necessary. However, being able to automate the series of processes from device registration to device control has greatly improved the efficiency of PC kitting. We will continue to incorporate new features in our ongoing efforts to further improve efficiency!
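To make the dynamic group configuration described above concrete: Microsoft's documentation on device groups for Windows Autopilot shows a membership rule that matches all Autopilot devices via the ZTDId tag in the devicePhysicalIds attribute. A rule along these lines (syntax as documented by Microsoft; verify it against your own tenant) is:

```
(device.devicePhysicalIDs -any (_ -contains "[ZTDId]"))
```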
Self-introduction & What's This About?

I am Yuki.T from the Global Development Division, in charge of operating and maintaining products for the global market. The members of the Global Development Division come from various countries and speak various languages. My team in particular has a mix of members who cannot speak Japanese and members who cannot speak English. To make communication within the team work, we have tried all sorts of tricks (struggles?). In this article, I would like to share what we tried and the insights we gained along the way.

Conclusions

- "If English is too hard, Japanese is fine."
- "If you can't say it, at least write it. But write it accurately."
- "It's hard work, but it's worth the effort."

Introduction (What kind of team?)

This is about my team (the operation and maintenance team) within the Global Development Division, about eight people. Its composition and way of working are as follows.

Nationality: A mix of full-time employees and partner-company members. The team is about one year old; at first it had only Japanese-speaking members, and members of other nationalities joined later.

Work style: A hybrid of remote and in-office work, with Agile development (Scrum). Scrum events are also attended by a mix of remote and in-office members.

Communication: Slack for messages and Teams for meetings. Atlassian Jira and Confluence for task and document management.

Members' language skills vary, but Japanese-first speakers make up the majority (6 out of 8).

| Category | Language skills | Members |
| --- | --- | --- |
| A | English only; no Japanese at all (non-Japanese) | 1 |
| B | Mainly English; everyday-conversation-level Japanese (non-Japanese) | 1 |
| C | Mainly Japanese; everyday-conversation-level English (Japanese) | 2 |
| D | Mainly Japanese; cannot speak English, can just about read and write it (Japanese & non-Japanese) | 4 |

Incidentally, I (Japanese) am a "C" above, with a TOEIC score of around 800. I can manage simple conversation, but as soon as a discussion gets complicated, my lack of vocabulary shows. Listening is also a real weak point of mine...

Next (YWT)

When the team was first formed, it consisted only of "C" and "D" members (hereafter "Japanese-speaking members"), and communication was basically Japanese only^2. To welcome the English-first "A" and "B" members (hereafter "English-speaking members"), we tried a number of things. Here I summarize the results in the style of the YWT retrospective method[^3] (what we did, what we learned, what we will do next), split by situation into "1. Messaging (Slack)," "2. Meetings (Teams)," and "3. Documents (Confluence, Jira)."

[^3]: YWT (what we did, what we learned, what we will do next) | Glossary | JMA Consultants Inc. https://www.jmac.co.jp/glossary/n-z/ywt.html

1. Messaging (Slack)

What we did: "Even if you don't understand Japanese, you can just copy-paste it into a translation tool, right?"

What we learned: "They only bother to translate and read at first." Copy-pasting every message into a translator is quite a hassle (try it yourself and you'll see). Quite often a message that mentions you turns out not to really concern you, so the experience of "I went to the trouble of translating this for nothing" piles up, and the will to copy-paste gradually fades. Also, in Slack you often cannot grasp the meaning from a single message without reading the whole thread, which makes translation even harder.

What we did next: "Write in both Japanese and English." For messages we really want to get across, we now also write them in English. The trick is not to try to translate all of the Japanese. I don't have a good example I can share publicly, but the idea is to make the gist clear in English; if the details matter to someone, they can translate the rest themselves or ask individually. Trying to translate everything is hard on the sender, too.

2. Meetings (Teams)

What we did:
① "We'll speak Japanese, so read the subtitles from Teams' translation feature."
② "Even if your English is poor, do your best and speak English!"
③ "All right, I'll interpret everything myself!"

What we learned:
① "The subtitles don't make sense." Our conclusion was that machine translation from spoken Japanese to English is still poor. Casual meetings with a few people are especially bad conditions for machine translation: people hesitate mid-sentence, omit subjects and objects, and talk over each other.
② "Nobody ends up happy." Halting English, spoken while the speaker wonders "...is this right? (anxious)," gets through to neither the Japanese-speaking nor the English-speaking members. And since people stop saying anything they don't know how to say in English, everyone became much quieter than in Japanese-language meetings. The meetings ended sooner, but we got less information out of them.
③ "The never-ending meeting." Since I had to repeat in English whatever the Japanese-speaking members said, meetings took roughly twice as long. On top of that, with my everyday-conversation-plus-a-little English, I often got stuck ("how can I say..."), stretching things out even further. And while I was speaking English, the Japanese-speaking members just sat there waiting, so meetings tended to lose focus.

What we did next:
"If English is hard for you, Japanese is OK." We asked people who struggle with English to simply speak Japanese, and I now interpret only the parts that concern the English-speaking members. This keeps meetings from running long.
"If you can't say it, at least write it." That alone would reduce the information reaching the English-speaking members, so we started writing meeting notes in as much detail as possible. Even if they cannot follow the discussion live, they can read the notes later with the browser's translation feature. Since we note down what was said as-is, the notes are sometimes a mix of Japanese and English.
"Even so, effort is still needed." There are still situations, like the Sprint Retrospective, where meaning has to be conveyed in real time rather than after the fact. At such times I add translations on the spot, even if it takes a while, for example like this (in blue): ![Example of retrospective comments](/assets/blog/authors/yuki.t/image-sample-retro.png =428x) In a Sprint Retrospective, I make good use of the gaps: while everyone is verbally explaining their Keep and Problem ideas, I quietly add translations.

3. Documents (Jira, Confluence)

What we did: "We'll write in Japanese, so read it with your browser's translation feature."

What we learned: "Confluence is mostly fine, but Jira is a bit rough." Design documents and specifications on Confluence translate relatively well, and many Global Development Division documents are written in English in the first place, so those need no extra work. The translation quality of Jira ticket comments, however, was mediocre. The main reason seems to be that, unlike formal documents, ticket comments often omit subjects and objects. Some Japanese comments are like personal memos that even a native Japanese reader cannot make sense of, so this is hardly surprising.

What we did next: "Write accurately and concisely." We made a point of writing without dropping subjects, verbs, and objects, and of keeping sentences as short as possible (bullet points are encouraged). This improves the accuracy of browser machine translation.

What we gained

Thanks to these "what we did next" efforts, communication within the team now works reasonably well. We also noticed the following side benefits.

Information gets recorded: We developed the habit of taking proper notes even for minor meetings. As a result, we are less often stuck when we want to look back and ask, "What did we decide back then?"

Fewer unspoken assumptions: Translating into proper English forces you to make explicit the subjects and objects that stay hidden in the Japanese. So we now check more often who will do something and what exactly a change applies to. Once you try it, you realize how often "who" and "what" are actually unclear. Asking "Is this something you were going to handle, ○○-san?" more often also reduces dropped tasks. As a bonus, things we had hesitated to ask directly ("I wish ○○-san would do this, but it's awkward to ask...") became easier to ask once we had the excuse of "I need it for the English translation."

More diverse opinions: Fewer unspoken assumptions and clearer communication seem to have led to people saying what they want to say and to a wider range of opinions. We can also take in more opinions from the English-speaking members, gaining perspectives that Japanese-speaking members alone would easily miss. For example, the Try ideas below: ![Example of retrospective comments](/assets/blog/authors/yuki.t/image-sample-retro.png =428x) These were Trys for the Problem "we could not properly write the background and purpose of tasks in tickets." The first idea is the earnest kind typical in Japan (no offense). Compared with it, the second one, from an English-speaking member, was essentially "let's all calm down a bit" — a completely different angle that made me think, "Huh, good point."

Summary

Communicating across languages takes considerable effort. But I feel that this effort leads not only to immediate mutual understanding but also to new insights and livelier exchanges of opinions. "Diversity is not an obligation or a cost, but a benefit and an advantage." With that in mind, I will keep working at it.
I am Gojo, a software engineer at KINTO Technologies. I do backend development for a mobile app called Prism Japan, which uses AI to suggest nice places to go around Japan. I co-authored this article with Saito, the Product Owner of Prism Japan. I will talk about how our relatively large agile team improved its overall development and teamwork.

About Prism Japan

First of all, let me briefly introduce Prism Japan, the service we are developing. In a nutshell, Prism Japan is a user-friendly app that leverages AI to provide personalized travel recommendations, helping users discover exciting places to explore. Have you ever wanted to make the most of your holiday in Japan, but found yourself unsure of where to go? Prism Japan can be your perfect companion, offering personalized travel suggestions to help you plan a memorable getaway. With the "Search by Mood" feature, for example, users can simply select a photo, and the app will suggest places to visit based on the chosen mood. We released the app for iOS in August 2022 and the Android version in April 2023. As of the end of October 2023, the total number of registered members has exceeded 30,000.

Prism Japan's Development System

Prism Japan runs on iOS and Android. The mobile app side is divided into an iOS development team and an Android development team, and the backend is divided into an API development team and an AI development team. Development is led by team members who specialize in their respective fields. In addition to the development teams, there are teams in charge of planning, analysis, and design. The Product Owner decides the direction of the entire project and thinks about the functions users will need. There are 15 members mainly in charge of Prism and 20 members involved in development sub-tasks, which makes it a relatively large family for an agile team.
How We Switched to Agile Development

The Prism Japan development team now works as a single team, but the frontend team and backend team used to work separately. When we needed to coordinate work between the teams, we sent requests through a Slack channel, and after a request was sent, the work was left entirely to the other team. The jobs were divided because the departments were divided, and managing each team this way caused the following issues.

- Since the development process was not shared between the frontend and backend, there was no consensus between teams
- There was no improvement spanning the entire team across frontend and backend
- Project Managers and remote members were less likely to feel ownership because development proceeded on a request-by-request basis

When the initial development of Prism Japan was over, we discussed switching to a development method suitable for the improvement phase of the app. Considering methods that could address the issues above, we quickly settled on agile development, which lets us flexibly change specifications while monitoring user reactions and is well suited to mobile apps. We had experts on the teams, so we decided to adopt Scrum, which also offered clear advantages for communication, one of our main issues.

How We Set Up Scrum from Scratch

Not all of our team members had experience with Scrum, so we needed to convey what Scrum was. We started off by having our Scrum Master, Koyama, teach the teams about Scrum. For more details, you can read Koyama's article below. How an iOS Engineer Took Certified Scrum Master Training and Become a Scrum Master

Starting with a Study Session

When we decided that we wanted to develop using Scrum, we began with a study session on Scrum. I think it was very useful: all of the members became determined to start using Scrum, and they all learned the basics.
Setting Up Scrum Events

The week after the study session, we set up the Scrum events. We were fumbling at first, so the Scrum Master and Product Owner took the lead in discussing what to do at each event. First we did the following.

- Set up a two-week Sprint
- Had the Product Owner create stories
- Started a 15-minute Daily Scrum every day
- Set up Sprint Planning
- Set up Sprint Reviews
- Set up Sprint Retrospectives

The Scrum Master took the lead in setting up and moderating the events above. This article will not go into the details of the Scrum events, but we felt we were actively taking a Scrum approach when we took concrete actions such as the Scrum Master taking the lead in setting up events, and the Scrum Master and Product Owner working closely together to hold them. In the process, I found it important to have a passionate Scrum Master pushing the team, along with a team willing to improve, especially during the initial stages when we were still finding our footing.

Team Building

Through Scrum development, I observed our team's gradual growth over time, especially in the following areas.

- It became easier to discuss specifications
- Team members actively exchanged opinions regarding the functionalities of the app
- We can now practice Scrum without depending solely on the Scrum Master

I think this is a common experience for many organizations, but Scrum did not function effectively for us at the beginning. In the first month after we formed a Scrum team, our Product Owner Saito and the Scrum Master Koyama played a central role in searching for ways to improve the team. It took about two months before Scrum ran smoothly.

The Product Owner's Role and Concerns

In general, the role of the Product Owner in Scrum development is to manage the development requirements through a product backlog and maximize the value of the product by defining the development direction.
It is a crucial role in Prism Japan, as it is in many organizations, and it is not easy to make tangible decisions in pursuit of a concept as vague as "maximizing the value of the product." At first, there were endless concerns over whether a given decision was the correct one, or whether users really needed a certain functionality. The Product Owner took two decisive actions to address these concerns, and as a result can now make decisions based on established criteria.

Step ①: Redefine the user issues that Prism Japan should solve

All Scrum team members took part in a workshop based on the Jobs Theory framework. I won't go into the details of the theory, but this allowed us to define the essence of the value that Prism Japan should deliver to its users.

Step ②: Implement data-driven decision making

Since our Product Owner has experience as a data engineer and data analyst, he designs user logs, visualizes application usage, and makes analyses based on problem hypotheses. This made it possible to incorporate app-related issues into our development policies while assessing how well users accepted released features.

The Story of an Issue with Scrum and How We Solved It

Finally, I will talk about an issue we had while developing with Scrum and how we solved it.

The Issue: the Product Owner and the engineers did not agree on what they wanted to do

When we first started developing with Scrum, we struggled to communicate requirements and specifications to each other accurately and at the right time. Various factors contributed to this, including specific specifications not being finalized until development started, and reliance on verbal coordination, which led to discrepancies in how each team member interpreted the information.

The Solution: Use Sprint Refinement and Sprint Planning

Courtesy is necessary for a good relationship, even with Scrum.
Changing the timing and manner of the requests and properly incorporating them into the Scrum events was very effective.

Sprint Refinement

We conducted Refinement sessions the week before each Sprint. The Product Owner explains the User Stories, and the team agrees on the requirements and makes a rough estimate. The Product Owner has to decide in advance which User Stories to address in the next Sprint.

Sprint Planning

Beginning by establishing the priority of User Stories and tasks, we then reach a consensus on the goals of each Sprint. This ensures the work is feasible in light of past performance, giving engineers accountability and confidence in what they are doing during the Sprint.

The Impact of Implementing Scrum

Although there were some minor issues with Scrum in the beginning, overall it brought positive changes to the team. Let me look back at the issues we had before Scrum and the benefits it brought.

Since the development process was not shared between the frontend and backend teams, there was no cross-team consensus. Now, we understand what both sides are working on at the Daily Scrum. In addition, during Refinement, Planning, and other events, we have had many discussions on backend implementation policy based on requests from the frontend side, so development starts smoothly and we avoid a lot of detailed rework.

There was a lack of improvements that considered both frontend and backend aspects. I think discussions have improved a lot, since engineers now participate in them with a sense of autonomy, responsibility, and a desire to improve the app. We now have discussions at the architecture level, considering performance and future scalability, and we can sort out things we would not even have discussed when the teams were divided.
Members who did not work closely with Project Managers were less likely to feel ownership because they developed on a per-request basis. Instead of working only on specific requests, engineers working on User Stories now engage in consideration, discussion, and even proposals to find the best approach for accomplishing tasks effectively. Not only did development improve, but I felt that many team members grew with each Sprint.

Conclusion

The development team and the planning/operation team share a common purpose and work together to make improvements. I hope this article can serve as a reference both for those already developing with Agile Scrum and for those who are just starting! Prism Japan was released just over a year ago, and since then the app has experienced growth and attracted an increasing number of members. Feel free to try the app and witness firsthand how it has evolved through our development system!

For those who want to try Prism Japan

You can install Prism Japan through the links below.
iOS: App Store
Android: Google Play
Hello. I am @hoshino from the DBRE team. The Database Reliability Engineering (DBRE) team operates as a cross-functional organization, tackling database-related challenges and building platforms that balance organizational agility with effective governance. DBRE is a relatively new concept, and only a few companies have established dedicated DBRE organizations. Among those that do, approaches and philosophies often differ, making DBRE a dynamic and continually evolving field. For the background on the establishment of the DBRE team at our company and the team's role, please refer to our tech blog post, "The Need for DBRE at KTC." This article discusses an issue encountered during a migration from Amazon Aurora MySQL 2 to Amazon Aurora MySQL 3, where the mysqldump command terminated unexpectedly without displaying an error message. I hope this proves helpful.

The root cause of the error

Let's start with the cause. The process terminated without an error message because the collation set in a trigger on the Amazon Aurora MySQL 2 database was `utf8mb4_0900_ai_ci`, which is not supported in MySQL 5, so the MySQL 5.7 mysqldump was unable to recognize it. The investigation that led to identifying the root cause and determining the solution is explained in detail below.

The phenomenon that occurred

When executing the mysqldump command directly to export data from Aurora MySQL 2, the process unexpectedly terminated without generating an error message. After executing the command, I checked the exit code, and 2 (Internal Error) was returned. It was evident that an error had occurred, but the exact cause could not be determined.

```shell
$ mysqldump --defaults-extra-file=/tmp/sample.cnf > sample.sql
$ echo $?
2
```

Cause investigation

To determine the root cause of the issue, I followed these steps.
First, I examined the behavior by running a different version of the mysqldump command. The command used so far was from the MySQL 5.7 series, matching Aurora MySQL 2:

```shell
$ mysqldump --version
mysqldump Ver 10.13 Distrib 5.7.40, for linux-glibc2.12 (x86_64)
```

I then attempted the export using the mysqldump command from MySQL 8:

```shell
$ mysqldump80 --version
mysqldump Ver 8.0.31 for Linux on x86_64 (MySQL Community Server - GPL)
$ mysqldump80 --defaults-extra-file=/tmp/sample.cnf > sample.sql
$ echo $?
0
```

The export succeeded. This indicates that the error might be caused by a version difference in MySQL. Furthermore, to investigate the possibility that the mysqldump command itself was hitting an internal error, I tested various options to see whether any error messages appeared. The result showed that adding the `--skip-triggers` option prevents the error:

```shell
$ mysqldump --defaults-extra-file=/tmp/sample.cnf --skip-triggers > sample.sql
$ echo $?
0
```

This suggests that the error occurs in the trigger-related part, so I checked the trigger settings:

```sql
mysql> SHOW TRIGGERS FROM sample_database \G
*************************** 1. row ***************************
             Trigger: sample_trigger
               Event: UPDATE
               Table: sample_table
           Statement: BEGIN SET NEW.`lock_version` = OLD.`lock_version` + 1; END
              Timing: BEFORE
             Created: 2024-10-04 01:06:38.17
            sql_mode: STRICT_TRANS_TABLES
             Definer: sample-user@%
character_set_client: utf8mb4
collation_connection: utf8mb4_general_ci
  Database Collation: utf8mb4_0900_ai_ci
*************************** 2. row ***************************
(The rest is omitted)
```

Here, I noticed that the database collation was set to `utf8mb4_0900_ai_ci`, a collation that is not recognized by MySQL 5. I modified the trigger definition of the table where the error occurred to use `utf8mb4_general_ci` and then executed the mysqldump command again:

```sql
mysql> SHOW TRIGGERS FROM sample_database \G
*************************** 1. row ***************************
             Trigger: sample_trigger
               Event: UPDATE
               Table: sample_table
           Statement: BEGIN SET NEW.`lock_version` = OLD.`lock_version` + 1; END
              Timing: BEFORE
             Created: 2024-10-04 01:06:38.17
            sql_mode: STRICT_TRANS_TABLES
             Definer: sample-user@%
character_set_client: utf8mb4
collation_connection: utf8mb4_general_ci
  Database Collation: utf8mb4_general_ci
*************************** 2. row ***************************
(The rest is omitted)
```

```shell
$ mysqldump --defaults-extra-file=/tmp/sample.cnf > sample.sql
$ echo $?
0
```

The mysqldump command succeeded. This difference in collation also explains why the MySQL 8 command worked. The investigation revealed that mysqldump failed because the database collation set in the trigger was `utf8mb4_0900_ai_ci`, which does not exist in MySQL 5.

Relationship between Amazon Aurora MySQL 2 and MySQL 5.7

Amazon Aurora MySQL 2 is based on MySQL 5.7, but the two are not entirely identical. AWS has added its own extensions to Aurora, incorporating some features from MySQL 8.0, such as the `utf8mb4_0900_ai_ci` collation, which was the root cause of this problem. When I try to specify `utf8mb4_0900_ai_ci` as a collation in MySQL 5.7, the following error occurs:

```sql
mysql> ALTER DATABASE sample_database CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
ERROR 1273 (HY000): Unknown collation: 'utf8mb4_0900_ai_ci'
```

On the other hand, the same command is executed normally in Aurora MySQL 2.
```sql
mysql> ALTER DATABASE sample_database CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
Query OK, 1 row affected (0.03 sec)

mysql> SHOW CREATE DATABASE sample_database;
+------------------+---------------------------------------------------------------------------------------------------------+
| Database         | Create Database                                                                                         |
+------------------+---------------------------------------------------------------------------------------------------------+
| sample_database  | CREATE DATABASE `sample_database` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */  |
+------------------+---------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
```

Further investigation

To determine whether the error was caused solely by the trigger, I examined other MySQL objects as well. In an Aurora MySQL 2 environment, I created a view with the collation set to `utf8mb4_0900_ai_ci` and observed its behavior during the dump:

```sql
CREATE VIEW customer_view AS
SELECT customer_name COLLATE utf8mb4_0900_ai_ci AS sorted_name, address
FROM customers;
```

When I run the mysqldump command, it succeeds without any errors:

```shell
$ mysqldump --defaults-extra-file=/tmp/sample.cnf > sample.sql
$ echo $?
0
```

Next, I conducted the same test with a stored procedure in the Aurora MySQL 2 environment:

```sql
DELIMITER //
CREATE PROCEDURE sample_procedure()
BEGIN
    DECLARE customer_name VARCHAR(255);
    -- String manipulation with specified collation
    SET customer_name = (SELECT name COLLATE utf8mb4_0900_ai_ci FROM customers WHERE id = 1);
    -- Comparison using collation
    IF customer_name COLLATE utf8mb4_0900_ai_ci = 'sample' THEN
        SELECT 'Match found!';
    ELSE
        SELECT 'No match.';
    END IF;
END //
DELIMITER ;
```

In this case as well, the dump completes successfully:

```shell
$ mysqldump --defaults-extra-file=/tmp/sample.cnf > sample.sql
$ echo $?
0
```

Aurora MySQL 2 allows you to use the collation `utf8mb4_0900_ai_ci`, which does not exist in MySQL 5.
However, I discovered that when the mysqldump command is based on MySQL 5, it fails to recognize this collation, resulting in errors specifically in the trigger-related sections. Since the issue does not occur with views or stored procedures, I suspect the problem is related to how collation is handled in triggers.

Solution

The problem occurred because the database collation set in the trigger was `utf8mb4_0900_ai_ci`, which is not supported in MySQL 5. To address the error, I changed the database collation to `utf8mb4_general_ci` and reconfigured the trigger. This enables the MySQL 5.7 mysqldump command to correctly recognize the collation, allowing the export to complete successfully:

```sql
ALTER DATABASE sample_database CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
-- Recreate the trigger if necessary
```

Another solution is to use the mysqldump command from MySQL 8.0. MySQL 8.0 clients recognize `utf8mb4_0900_ai_ci`, so it is possible to export without changing the collation of the database:

```shell
$ mysqldump80 --defaults-extra-file=/tmp/sample.cnf > sample.sql
$ echo $?
0
```

In some situations, however, changing the client version may not be feasible due to environmental or other dependency constraints.

Conclusion

In this case, the mysqldump command terminated without displaying any error messages, and it was only by checking the exit code that I discovered an error had occurred. If a process ends without an error message like this, there is a risk of unknowingly exporting or importing incomplete data. Therefore, when backing up or migrating a database, it is crucial to verify the results, for example by checking the exit code. Aurora MySQL 2 has already reached its End of Life (EOL) and is no longer supported. Please be mindful if you still have environments running on Aurora MySQL 2 and are planning a migration.
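To make the exit-code advice above concrete, here is a minimal sketch of a wrapper that fails loudly when a dump exits non-zero. The `run_dump` helper and the `--result-file` usage are illustrative, not from the original article:

```shell
#!/bin/sh
# Hypothetical helper illustrating the advice above: a dump that prints
# no error message can still fail, so always check the exit code.
run_dump() {
    "$@"                      # run the dump command passed as arguments
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "dump failed with exit code $status" >&2
        return "$status"
    fi
    echo "dump succeeded"
}

# Usage sketch (config path from the article; --result-file avoids mixing
# the dump output with this helper's status message):
# run_dump mysqldump --defaults-extra-file=/tmp/sample.cnf --result-file=sample.sql
```
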
Introduction

Hello, I am Kang from KINTO Technologies' development head office. I joined the company in January 2022 and have been working on the Website Restructuring Project ever since. As KINTO's customer base grows, we are trying to expand various services along the way. The existing website had various scalability issues when it came to incorporating new functions, so the Website Restructuring Project was started in August of this year to solve these problems and make improvements. In this article, I will talk about the front-end (FE) team behind the Website Restructuring Project, of which I am a part.

Website Restructuring FE Team's Goals

- Create an environment that enables fast new development
- Turn a complex environment into a simpler one
- Redefine the intricately interconnected CSS and JS sources
- Transfer membership management functions to the member information platform
- Work with the Creative Group to implement designs effectively (including UI/UX improvement suggestions)

The Restructuring FE team worked to accomplish the above goals. During the development phase, we discussed possible improvements at each sprint meeting and worked with other teams to make the project more efficient. We updated the code for greater reusability and intuitiveness, making maintenance easier. Using TypeScript prevented unexpected errors, made debugging easier, and improved productivity.

Technological Specs We Incorporated into the Project

Our Concerns During the Project

In addition to restructuring the specifications of the existing project, we also improved existing functions, added new ones, and refactored code, so it was difficult to measure work in progress among us. We struggled to find ways to efficiently communicate specifications and development details with collaborating teams (BE team, Design team, Infrastructure team, QA team, etc.):

- How to build a good development environment
- How to create code that is easy to maintain and reuse
- How to clear up misunderstandings between team members about the project's specifications and technology, and make the code more consistent

In order to solve these issues:

As development went on, we recognized the importance of reviews, so we created rules for them and made every effort to set aside time for them every day. We decided not to designate a specific person in charge of reviews, and instead created a system where anyone could freely review any Pull Request. As a result, it became easier to keep track of each other's work and to check the specifications and components other team members were working on.

We made a Confluence page for specification changes shared with other teams and used it to communicate with them, alongside the other tools available in the company. The Confluence page contained changes to the API specifications and the Swagger definitions provided by the BE team, allowing us to quickly understand the API specifications and clearly check and address updates.

We collaborated with the Creative Team using design tools such as Adobe XD and Figma. With them, we gained a better understanding of the UI/UX to be implemented, allowing us to create components that are not only easier to understand but also user-friendly and intuitive. We maintained open communication as we collaborated, and if there were any unclear points or changes, we addressed them quickly through huddle meetings. As a result, we were able to minimize the number of bugs during development.

In order to create a better development environment, we wrote various work guidelines for the team. We communicated using Outlook, Slack, Confluence, and other tools, and kept up with each other's current work through daily meetings.
While working on tasks, we actively discussed each other's concerns and collaboratively addressed problems within the team. At planning meetings, we checked each team member's progress and divided the work for each sprint to prevent excessive workload. We also held retrospectives after each sprint and talked about what we regretted, what went well, and what we wanted to improve in order to build a better development environment.

We used the Atomic Design pattern to improve code reusability through continuous refactoring. We consolidated our definitions of Atoms, Molecules, and Organisms in a Confluence workspace and built a shared understanding among team members through meetings. As we developed new features and UIs, we were also able to build independent, pure components. We also tested with Storybook and Jest for React to ensure the quality of the components. As a result, we were able to create components with higher reusability.

We consolidated the specifications for the existing project and the new features to be added in the project's Confluence workspace and shared them there. We also created reference materials on the FE development environment to help new participants adapt more quickly. We set rules for code management, covering topics such as review flow, branch operations, and coding conventions, in order to keep the code consistent. We took turns holding book discussion sessions to share knowledge about the technologies each of us was using.

Summary

Through this project, I was able to reflect on what makes a good project. Each programmer's individual performance is an important factor, but I think it is just as important to communicate constantly as a team. I felt that it was important to work together to create a better development environment within the team and to try various things with a flexible mindset, without fear of failure.
In the Restructuring Team, we were able to develop an environment with a positive atmosphere where anyone could freely speak up and try anything. I think that through this environment and experience, we were able to grow both as a team and as individuals. Successfully releasing the project in August was made possible by the effort of all the teams within the Restructuring Team, who persevered through the challenges for an extended period. Thank you for reading to the end.
Introduction

I'm Kobayashi, a Product Manager (PdM) for internal systems at KINTO Technologies. After joining the company, I was assigned to the website restructuring project for KINTO ONE ( [KINTO] New Vehicle Subscription from Toyota | Full Service Leasing (kinto-jp.com) ), where I was in charge of project management, test promotion, and migration promotion. When I was assigned to the project, it was already a year into development, and we were at the stage of conducting integration tests and releases. However, we encountered a number of challenges along the way. In this article, I would like to introduce some of the challenges I first encountered as the person in charge of test promotion.

What is the Website Restructuring Project?

It is the name of an in-house development project to renew the e-commerce site of KINTO ONE, our new-vehicle subscription service and main KINTO offering in Japan. The goal was to improve development productivity through a thorough review of its architecture and data structures. More than 20 people were involved from different products and services, with a total of about 50 points of contact on the business and development sides.

First, Grasp the Situation

I joined KINTO Technologies in February 2022. My first step was to participate in the regular weekly meetings to understand the situation and work out how to promote testing and migration. The internal integration test had started in mid-January 2022, shortly before I joined the company. It was reported that the project was on track, with a slight delay of a few days. At this point, nothing seemed off about the schedule, and the internal integration test was due for completion at the end of May 2022. It was an ideal setup in which the developers created the internal integration test plan, while a test team of non-developers was responsible for writing and executing test cases.
Initial Challenges

The internal integration test was planned in 7 phases, but progress began to slow in late February, and by the first report in March the schedule had slipped by a week. At that time, there were more than 10 bugs that affected the subsequent testing phases, and it became apparent that fixing them alone would take about a week. To confirm whether we could proceed as planned, we interviewed the developers about the situation, and the following comments came up:

- Not aware of the completion conditions for the unit test
- Not aware of the start conditions for the integration test
- The content of the internal integration test differed from expectations

Although the document compiled by the developers describes what kind of tests should be conducted, it indeed contains no description of the start and completion conditions. That kind of information should be found in a written policy and plan, but there was none. Well, that made sense. Regarding the expected test content, the document compiled by the developers stated that testing would be conducted through a series of screen transitions, and the test items were described as follows:

- Are the screen display and transitions correct when correct data is entered?
- Are records correctly created in the DB?
- Is it possible to recover data when the process is interrupted partway through?

From this content, story-based testing seems to be assumed, which aligns well with the test cases prepared by the test team. Consequently, we had to consider what caused the confusion. Looking at the documentation compiled by the developers, you will notice the following description of how to run the tests:

- Conduct testing on each browser (same as the screen test)
- For the database, check the DB values directly using SQL

It states that the procedure is the same as the screen test conducted during unit testing.
From this description, it appears that the same tests as the unit tests were to be conducted, just with the front end and back end integrated. I now somewhat understood the cause of the confusion. This challenge underlines the importance of a plan with no inconsistencies in understanding.

Addressing the Challenges

Our response to this challenge was to stop the internal integration test for 2 weeks, since forcing the test to proceed risked further delays. This response had the following effects:

The Development Side Can Recover

Stopping the test freed up time for fixes and for recovering from the delay. It also increased quality, because the developers could focus on the fixes. Both the developers and the test team were freed from the stress of tests that were not progressing.

The Quality Control Policy and Quality Evaluation Criteria Can Be Organized

Stopping the test made it possible to organize the policy and criteria, not only for the internal integration test but also for testing as a whole, enabling us to disseminate the following:

- Define the quality to be ensured in each test phase as a quality control policy
- Define the quality evaluation criteria for each test phase

Although the policy and criteria were written after the fact, the developers accepted them. This flexibility is one of our strengths. The internal integration test then began to run smoothly.

What To Do After

Various challenges continued to arise thereafter. When schedule changes occurred due to other projects, we added quality enhancement tests and revised the plan to improve quality according to the situation, while conducting external integration tests, follow-up intake from other projects, and QA testing. As a result, we successfully released in August 2023. Given its size, there have reportedly been few post-release bugs. Nevertheless, when a bug occurs, it can significantly impact business operations, so I would like to continue exploring what I can do to improve quality.
Conclusion

Based on this experience, I believe the following 2 points are important:

- A plan with no inconsistencies in understanding
- Established quality control policies and quality evaluation criteria

Even when work is left to the developers, it is essential to have a plan, policy, and criteria in place as guideposts in case something happens. Also, what I realized while responding to this issue is that we never gave work instructions to the developers. In the website restructuring project, the policy was to leave everything from detailed design to the internal integration test to the developers. This was a project policy I wanted to uphold, and I think I managed to do so. These were some of the initial challenges I encountered in the website restructuring project.
Introduction

My name is Rina ( @chimrindayo ), and I'm involved in the development and operation of Mobility Market and the operation of the Tech Blog at KINTO Technologies. I mainly work as a frontend engineer using Next.js. I'm excited that Oden season is here🍢, and this year I'm looking forward to Tomato Oden! 🤤 At KINTO Technologies, we do our best to provide company-wide support for sharing acquired knowledge and skills, such as presenting at external events and posting on the Tech Blog. In this article, I will introduce what happens before a Tech Blog article is released, the publication process, and our efforts to promote output at each step!

The Tech Blog Project

First things first, I would like to introduce our team, the Tech Blog Project at KINTO Technologies. We aim to promote the input and output of employee knowledge, starting with the operation of the Tech Blog. There are 8 members on the team, all of us holding concurrent positions. While working as product managers or engineers on other projects, we are always in pursuit of fun ways to create output! https://www.wantedly.com/companies/company_7864825/post_articles/510568 The initiatives I'm about to introduce are part of our ongoing efforts in the Tech Blog Project!

The Tech Blog Publishing Flow

Our publishing flow is divided into 3 main phases.

1. Writing: The first phase involves researching information, deciding on a theme, developing a plot, and composing the article.
2. Review: The second phase is the review of the written content. At KINTO Technologies, we conduct a 3-step review to check for typographical errors and verify the content of articles.
3. Release: The third phase is releasing the article. We translate the articles and release them via GitHub.

Now, let me show you what kind of support we provide in each phase and what we actually work on!
Until the Tech Blog Article Is Published

First, I'd like to introduce the Tech Blog team's efforts during the writing phase.

Consultation Desk

During the Advent Calendar writing period, we Tech Blog team members take turns being on call for an hour each day in a huddle on a Slack channel, creating a system where writers can freely ask questions and clear up any doubts in their process. Up to 5-6 people would join the huddle per day, and it served as a place and time to solve minor writing issues, ask how to convert an article to markdown format, and so on. In fact, we received comments such as "I recommended the Consultation Desk to others!" It was better received than I expected. ✨

Interviews with Writers

For those who think "I can't come up with a story, but I want to write something!" or "I have a story, but I'm not sure how to turn it into a good article," the Tech Blog team offers interviews. An interview takes about 30 minutes and focuses on the kind of work you've been doing since joining the company, as well as how you solve problems and issues in each task. Moreover, based on the results, we will even draft a plot for the article within the interview time. Through the interviews, we aim to bring out the best in everyone who is worried about writing, by lowering the hurdle and by reminding them how valuable their everyday work is. The articles focus on each writer's own work: we promote not only what is commonly thought of as tech content but also output from support functions such as management and the office environment, so that all the members who contribute to maximizing technology can produce valuable output.

Review

After the articles are written, they go through a 3-step review from different perspectives.
The goal is to ensure the quality of the article through the 3-step review and to create an article that is easy to understand from a reader's point of view. We also make a point of expressing gratitude to the writer when conducting reviews.

Content Review

First, we conduct a content review to ensure the accuracy of the article's content. This review is conducted by the writer's team members or managers, mainly from the perspectives below. Most articles are reviewed by 2 to 3 or more reviewers, who suggest more reader-friendly expressions and praise the article's good points as well!

Review perspectives:
・Verify the accuracy of the content as a Subject Matter Expert
・Check for confidential information

The Tech Blog Team Review

Next is a review by the Tech Blog team. While valuing the writer's tone and voice, we suggest reader-friendly sentences and article structure, and check for orthotypographical errors such as the proper use of prepositions. We also strive to see the article from the reader's point of view, in some cases suggesting that more context be added.

Review perspectives:
・Orthotypographical errors
・Copyright

While I mostly review content as a Tech Blog team member, it's also interesting to learn and discover new ideas as a reader, such as "JIRA can track GitHub deployment history!", and it makes me want to try these ideas out in my own projects as well.

The CIO Review

The final review is by our CIO, Mr. Kageyama . All articles are reviewed by him, and upon his approval, the article is ready for release. I personally believe that this review process helps writers release articles with confidence.

Release

After all reviews are completed, we make final adjustments for the release. Here, I'd like to introduce the translation of articles, on which we place particular emphasis!

The Translation of Articles

About 25% of employees at KINTO Technologies are non-Japanese (as of November 2023).
Therefore, some of our members want to write in their native language or are not fully confident writing in Japanese. To meet their needs, writers can choose which language to write in, and all articles are translated from Japanese to English or vice versa. A Language Service Provider (LSP), an external subcontractor, delivers the base translation, and the final LQA work is performed internally. LQA stands for Linguistic Quality Assurance: a text produced entirely from an external point of view may inevitably lack the accuracy and context needed to convey the author's original intent, so adjustments to expressions and spelling errors are checked during the LQA step. (Reference: Proactive Engagement of Foreign Employees )

Conclusion

In this article, I have introduced the initiatives we undertake to promote output before a Tech Blog article is released to the public. I would like to continue making improvements that contribute to a more effective work environment for our colleagues! I also hope that the content we share on the KINTO Technologies Tech Blog is helpful to you. Finally, I would be delighted to exchange ideas with you about Tech Blog operations and technical PR! Please feel free to contact me through X with any comments you may have 🕊 https://twitter.com/KintoTech_Dev
Introduction

Hello, I'm Risako from the Project Promotion Group at KINTO Technologies. I usually engage in various projects in the role of Project Manager (PjM). In my previous article, I talked about my projects and what PjMs do at KINTO Technologies: How to Start a Cross-Divisional Project and Introduction to PjM Work . Feel free to take a look if you are interested. To give you a brief introduction to PjMs at KINTO Technologies: each product basically has its own development group or team, led by that team's Product Manager. For projects that cross product lines, such as launching new services, or for projects that are large in scale even if not cross-divisional, a PjM is assigned the role of initiating and overseeing the project. ![](/assets/blog/authors/risako.n/1.png =500x) What we call projects includes those related to KINTO's existing services such as KINTO ONE (vehicle subscriptions) and KINTO FACTORY, KINTO Technologies' owned media and services, and even new business launches under the KINTO brand, along with an array of other diverse projects that need attention. New projects emerge daily, and many may even start without clearly defined goals. Every time I embark on a new project, I feel the importance of the ability to move projects forward amid uncertainty, as well as the importance of advancing myself! In this article, I will share my thoughts on the theme of "the ability to move forward" and how it connects to being a "self-reliant person" ( jiso-ryoku ). ![](/assets/blog/authors/risako.n/2.png =500x)

What is Self-Reliance?

Sorry that the intro became a bit lengthy. Now, what is self-reliance? We have heard the word a lot in recent years (perhaps especially in the job market). I think people have a general sense of what it means, but it's not entirely clear to everyone.
I looked up the definition and found that the term "self-reliance" ( jiso-ryoku ) does not seem to have a precise definition, but Kotobank provides one under "self-propelled": running on its own power, not relying on the strength of others. In other words, it refers to the ability to run (to progress or operate) through one's own capabilities. In the context of work, it can be expressed as "the ability to move a task forward (and complete it under any circumstances) through one's own thinking and actions." It would be perfect if we could manage the part in parentheses as well! That would be the ideal. ![](/assets/blog/authors/risako.n/3.png =250x) Self-reliance = running on your own power!

So What Kind of Person Is Self-Reliant?

I'd like to take a step further and ask, "what defines a self-reliant person?"

- Someone capable of advancing tasks even when the goals are not clear
- Someone who can move forward in the absence of defined methods
- Someone who can create output from their own ideas, rather than simply imitating others

On the other hand, a person who is not self-reliant looks like this (the opposite of someone who can move forward on their own!):

- Someone who can't work without instructions
- Someone who assumes they can't do what they don't know about
- Someone who only does what they are told to do

Notice that I wrote "move forward" or "advancing" for a self-reliant person. But don't get me wrong: just propelling yourself left and right isn't good either. "Someone who can move forward properly" is the truly self-reliant one! ![](/assets/blog/authors/risako.n/4.png =250x) Don't run wild! Stay in control!

How Can I Become a Self-Reliant Person?

If there were a sure-fire way to become self-reliant, I'd like to learn it myself. In the meantime, let me share what I try to keep in mind when working on projects.

Value Dialogue

When two people work together for the first time, it is natural that there will be gaps between them in many areas.
Assumptions are easy to make, so we need to be careful not to presume things or make premature judgments. Instead, share your thoughts with each other. Relationships are formed by sharing ideas.

Create Small, Output Small, and Value Feedback

In circumstances with a lot of uncertainty, it is normal to be afraid of creating deliverables or being assertive when communicating with the team. Start with small outputs to gradually bridge the gap with those around you. Don't view the opinions you receive as a sign of inadequacy on your part, but rather as feedback (take it positively, not negatively).

Ensure the Definition of Done Is Aligned

Since the Definition of Done may vary from person to person, make sure the understanding is the same among all stakeholders. For example, imagine a person assigned to review a team's operation who finishes it on their own, adds just a manual change, and closes it without consultation. But someone else on the team, with different expectations, may have actually wanted this person to come back to the team, share what they did, and ask for feedback (what is normal for you may not be normal for others).

Do Not Worry Too Much about Unnecessary Problems

Worry about them when the time is right (don't stress over things that aren't worth your energy now). It often happens that, even after very thorough deliberation, the situation evolves differently when the need actually arises (in which case all that hard thinking was for nothing in the end).

What is Value?

Be aware of the purpose of the work, who and what the end product is for, and the potential outcomes it will bring (there is a bad tendency for the means to become the purpose). When a change request arises during development, developers tend to think that a mid-process change is difficult.
However, by considering the value and purpose of the project, they can more easily embrace changes in a positive manner, leading to a more satisfying outcome. Be aware of the meaning and value you bring (just doing what you are told doesn't bring anything).

Be Aware of What You Can Do

It's easy to see what you can't do, so make a point of being aware of what you can do (your value). For example, think about what has changed (or what you have accomplished) since you started. Identify new capabilities and recognize what you can do now that you couldn't before.

Learn From Others: Be Aware of What You Like about the People around You!

Being aware of what you like in others can sometimes bring you a little closer to what you consider good. For example: Mr. XX doesn't talk much, but the materials he makes are very easy to understand! What makes them easy to understand? If you look at it from that perspective, you may find tips on how to improve yourself. This list could go on and on, and trying to break things down to fundamentals may have made it feel a bit like a textbook... (For those of you reading, I hope at least one item on the list caught your eye.)

Self-Reliance and the Agile Mindset

Some of you might have noticed a similarity to the Agile mindset in many of the things I've talked about today. In order to form self-organizing teams rooted in Agile principles, I believe each person should be able to work autonomously and be self-sufficient. The foundation of these abilities can be found in Agile and Scrum, and I think these concepts were instilled in me through my previous experience. "The best architectures, requirements, and designs emerge from self-organizing teams." At KINTO Technologies, some groups adopt Scrum depending on the product, while others take a more Waterfall-style approach.
However, regardless of the approach, I think the overall mindset at KINTO Technologies is Agile (creating value in small increments and making iterative improvements, with an emphasis on dialogue and cooperation). If you are interested in working in an environment with an Agile mindset, we would be happy to have you join KINTO Technologies. And of course, if you want to demonstrate your self-reliance, you can do so to your heart's content here! We look forward to welcoming you.

Conclusion

I have talked primarily about conceptual matters and, unfortunately, gave few concrete examples of what KINTO Technologies is really like. Even so, I would be delighted if this article could provide hints for those who wonder what self-reliance is, and be an opportunity for you to think about your own self-reliance.
はじめに こんにちは、11月入社の鈴木です! 本記事では2023年11月入社のみなさまに、入社直後の感想をお伺いし、まとめてみました。 KINTOテクノロジーズに興味のある方、そして、今回参加下さったメンバーへの振り返りとして有益なコンテンツになればいいなと思います! 白井 自己紹介 8月入社のプラットフォームGの白井です。AWSのインフラ設計・構築などを行なっています。入社エントリが面白そうだなー、と思い参加させていただいています! 所属チームはどんな体制ですか? Osaka Tech Labに2名。神保町オフィスに5名の7名体制となっています。 KINTOテクノロジーズ(以下KTC)へ入社したときの第一印象?ギャップはありましたか? フルリモート環境から基本出社(週1~2は在宅)の環境に変わったので、少し戸惑う部分はありました。一方で、今では出社した方が話し合いがしやすいなと思ったので、プラスの印象です。 みなさん技術力が強いなという印象でした。私が元々インフラをずっと触っているというわけではなかったのかも知れませんが、最初は話を理解するのにも精一杯でした。 現場の雰囲気はどんな感じですか? とてもアットホームです!基本出社しているのでTeamのメンバーに相談することがすぐできて助かっています。また、gatherを導入しており、在宅勤務時の時にも気軽に相談したい相手のところに行って聞くことができます。気軽に相談できるようになっている理由としては、アットホームな感じと、仕事外のことも良い感じに話し合える仲であるところかと思います。 ブログを書くことになってどう思いましたか? 何事も挑戦だなーと思いました。KINTO TechBlogには良い記事がたくさんあると思っているので、その足がかりとしてはとても良いことだと思っています。実は私はすでに「 CloudFront FunctionsのDeployのプロセスと運用カイゼン 」というタイトルでアドベントカレンダーで執筆しているので、是非見てみてください! 11月入社の同期から他部署のメンバーへ質問 会社内で同じ趣味を持つ人たちが集まるようなクラブや同好会はありますか?もしあれば、白井さんはどんな部に参加されていますか? たくさんあります! Tech Blog でも運動系のクラブが紹介されていました! 私が参加しているのだと、紹介されていませんがランニングサークル(RUN TO)、e-sports部です! AKD 自己紹介 コーポレートITG Operation Processチーム、同期の中で唯一のOsaka Tech Lab所属AKDです。コーポレートエンジニアをやっています。いわゆる情シスです。 所属チームはどんな体制ですか? 当社のPCや各SaaSに関するオン/オフボーディングや各プロセスの可視化や改善を4名体制で担っています。 KTCへ入社したときの第一印象?ギャップはありましたか? エンジニアの会社だし、そんなにコミュニケーションはないのかも…と思ってましたが定期的に勉強会や部長会議事録を読む会などコミュニケーションの機会は多々あり、そこがいいギャップでした。 現場の雰囲気はどんな感じですか? 皆さん、いい意味で遠慮することなく、お互いを尊重した関係性を築いているように感じています。 コーポレートITGは室町・神保町・名古屋・大阪にメンバーが在籍しており、またチームも5つあって大所帯ではありますがコミュニケーション用の常設Zoomがあり、そこで拠点やチームを超えた会話が行われていてよい空気が流れているように思います。 ブログを書くことになってどう思いましたか? 入社エントリってみたことあるけど、選ばれた人が書くのかと思いきや全員書くのか!と思いました。あとシンプルに中途だけど、同期感があって好きです。 11月入社の同期から他部署のメンバーへ質問 1ヶ月経って感じた、Osaka Tech Labの雰囲気を教えてください。 包容力が高い拠点で出張されてくる方、新入社員の方、誰でもwelcomeな雰囲気があります。 SSU 自己紹介 KINTO ONE開発GのSSUです。ディレクターとして、トヨタの販売店に対するDX支援開発のディレクションを担当しています。 所属チームはどんな体制ですか? オウンドメディア&インキュベートGのDX Planningチームに所属しています。トヨタ販売店の中でのボトルネックをIT の力で解消し、お客様へ幅広いモビリティの選択肢を届けるということが チームミッションです。プロデューサー2名、ディレクター3名、デザイナー2名の計7名で業務にあたっています。 KTCへ入社したときの第一印象?ギャップはありましたか? 
自動車業界というイメージよりも若い人が多く、自由度も高いというのが第一印象です。 現場の雰囲気はどんな感じですか? まだ入社1ヶ月ですが、DX Planningチームは個性豊かでみなさんそれぞれ違うなという感じです。一緒に案件を進める中で、MTGや個別にコミュニケーションをとると、この違いによって自分が気づけないことに気づけるのでチームの強みだなと思います。 ブログを書くことになってどう思いましたか? 人生で初めてのブログが、とうとうきたか、と思いました。 11月入社の同期から他部署のメンバーへ質問 KTCのSlackワークスペースで一番好きな絵文字を教えてください。 キノコがすごい顔で走っている生き急いでる絵文字が好きです。 kiki 自己紹介 人事採用Gのkikiです。採用業務とテックブログ運用PJTにも参加しています。 所属チームはどんな体制ですか? 採用チームは現在(2023年12月時点)私含め6名です。個性豊かなメンバーがいて、全員が全員の業務に関心を持ちながら切磋琢磨しあいながら日々採用業務に携わっています。 KTCへ入社したときの第一印象?ギャップはありましたか? 思った以上にフラットでオープンだと思いました。人事という立場もあるかもしれませんが、誰が言ったからという理由で議論が進むことは少なく、妥当性があるかやチームの動き方として「今の最善は何か」という観点で仕事を進めることが多いように感じます。 入社2週目でOsaka Tech Labの情報共有会に参加させて頂く等、すぐに仲間のように迎えてくれて温かい人が多いな、という印象です! 現場の雰囲気はどんな感じですか? 話しやすい空間を作るために、敢えて雑談を挟んだりすることも多く組織や人の状況に常にアンテナを張れる環境です。入社したばかりだからといって、最初の2週間位はかなり遠慮していたのですが、採用業務は完全に未経験という訳ではないため、気づいたことや入社したばかりの新参者だからこそ、「ここどうなってるの?」という疑問については都度都度議論しやすい空気感で、その点は有難いです。 ブログを書くことになってどう思いましたか? シンプルに「嬉しい!」が感想です。テックブログ運用PJTのメンバーも入社初月から関わらせてもらっています。外部発信は積極的に行ってきていないので、問題発言をしていないか、気にしてしまうけれど文章を書くことは好きなので良い実験場という認識をもっています。 11月入社の同期から他部署のメンバーへ質問 ストレス発散方法を教えてください。 普段そこまで聴かないロックを聴いて発散します。Franz Ferdinand、夜の本気ダンスが特に良いです。また、自宅で変なダンスをすると発散できると何かの記事で読んでからは、人目につかないようなら自宅等では踊るようにしています。(めっちゃおススメです!) Y.Suzuki 自己紹介 プロジェクト推進Gの鈴木です。 KINTO FACTORYのフロントエンドエンジニアを担当しています。 所属チームはどんな体制ですか? 業務委託の方や部署を兼務されている方はいらっしゃるもののマネジメントから実装までKTCのメンバーで構成されたチームです。 その中でフロントエンドは12月にも新たなメンバーが増え6名体制になっています。 KTCへ入社したときの第一印象?ギャップはありましたか? 入社前は平均年齢も高く事業会社になるのでもっとお堅い環境かと思っていました。入社してみるとフラットなコミュニケーションも豊富で、新しい取り組みや面白いって思えることには寛大でした。 自分よりも年齢や役職の高い方々は経験を活かしながらも遊び心を持ち探究心が高く、カジュアルさと大人の雰囲気をうまく兼ね備えている方が多い環境だなと思いました。 前職は在宅中心だったため出社と在宅のハイブリットの勤務は通勤も辛いし少し嫌かもと思っていましたが、「環境に馴染みやすいし、ハイブリットすごくいい」って気持ちです😳 現場の雰囲気はどんな感じですか? とにかく最初はわからないことが多いのでお互いに話しやすい人間関係を作らなければと思い入社して1週間経たないくらいの頃デスクで「ねるねるねるね」を食べてみました。みなさんと雑談が生まれ微笑ましく接してくださり、最近はチームのメンバーと仕事の話をしながら みかん を一緒に食べています。 「実はエンジニア業務以外にもこんなこともできるんです」と1on1や食事の場で話したところ「そういうのできる人あまりいないからうまく活かせないか相談してみる」と入社2週間の頃にお話いただき、現在フロントエンド業務にとどまらずプロダクトをよくしていくための業務拡大を模索中です! 
When our office days line up, we all go to lunch together, so there are plenty of chances to communicate beyond work. How did you feel about writing a blog post? Engineer blogs center on technology, so the content may already exist elsewhere, extensive verification may be needed, and even choosing a topic is hard; writing one felt like a high hurdle. This time it was simply a new-joiner entry, so I hoped to share useful information with people interested in KTC. A question from a fellow November joiner in another department: What is the moment you enjoy most at work? When someone asks me for advice, even about simple things. I've only just joined, so it makes me happy to feel relied on and to realize there are things I can contribute. I want to keep absorbing the admirable qualities of others so I can broaden what I can do. T.F Self-introduction I'm T.F from the Project Promotion Group. I work on the back end for KINTO ONE used cars. What is your team structured like? The front end, back end, and BFF (backend for frontend) are each handled by a mix of employees and partner-company members. What was your first impression of KTC when you joined? Were there any gaps? I was surprised that I could take paid leave right after joining. It helps, since I'm planning to move. What is the atmosphere like on site? There are many kind people, and it's easy to ask questions and make suggestions. How did you feel about writing a blog post? Going from being a reader before joining to being a writer feels strange. A question from a fellow November joiner in another department: What work have you done in your first month? I've only just joined, so I've only done simple things: small development tasks, code reviews, and estimates for a project set to kick off in earnest next year. Behind the scenes, we're also working to adopt approaches like domain-driven design and clean architecture. A.N Self-introduction I'm A.N from the Common Services Development Group. I'm the PdM for the membership platform that underpins KINTO ID. How is your team structured? We are a team of six (including partner-company members). What was your first impression of KTC when you joined? Were there any gaps? I caught a cold on my very first day and finally went down on day three, but I was granted sick leave even in my first month, which was a big help. What is the atmosphere like on site? Partly due to the manager's policy, I think, each member's freedom is respected. Everyone is an expert, so they act autonomously. How did you feel about writing a blog post? It's daunting to think it could affect the company's public relations, however slightly. A question from a fellow November joiner in another department: KTC has Slack channels for hobbies and activities outside work. Did any catch your eye? I just learned about it today: a channel where the only thing you do is comment "Good morning!" every day when you arrive at the office. Why it was created is a mystery, but everyone in it seems to be having fun, which is heartwarming. F.T Self-introduction I'm F.T from the Mobile App Development Group. I work on the Android version of the Unlimited app. How is your team structured? The Android team develops with five members, including me. What was your first impression of KTC when you joined? Were there any gaps? I was surprised there was a proper orientation even for mid-career hires. I was also impressed by how few boundaries there are between teams in the office, both physically and psychologically: there are study sessions for Android developers, and communication within each app team crosses OS lines. What is the atmosphere like on site? There's plenty of time to focus quietly on work, but also a kind atmosphere where you can ask questions right away when you're stuck. How did you feel about writing a blog post? I was full of anxiety. A question from a fellow November joiner in another department: One month in, what makes you glad you joined this company? I'm genuinely happy to be working in such a high-level engineering environment. Many people have diverse hobbies, so I learn a lot outside work too. W.Song Self-introduction I'm W.Song from the data engineering team in the Data Analysis Group. I mainly work on data integration. How is your team structured? Four people, counting the team leader and members. What was your first impression of KTC when you joined? Were there any gaps?
It's great that the company has a bookshelf. There are many popular books, and I feel everyone is eager to learn. Actually, it was less a gap than my own assumption: before joining, I had seen photos of the office, and the junction area in particular looked so stylish that I assumed the seating was free-address. What is the atmosphere like on site? I feel I can talk with people at a relaxed pace. Even though everyone is busy, they gave me careful explanations, for which I'm truly grateful. For the first time in a while, I'm in an environment with lots of communication. How did you feel about writing a blog post? I think it's a wonderful way to produce output. Beyond promoting yourself, it helps you connect with people who share the same concerns and ideas, and even make friends. A question from a fellow November joiner in another department: Has anything changed since you joined KTC? My interest in cars has deepened. I go to the office three times a week, so I must have lost some weight. And my impression of the 😇 emoji has completely changed: I used to use it to mean "happy, yay, it worked," so I was surprised to learn it means "I'm done for, it's over." In Closing Thank you all for sharing your impressions amid the busy period right after joining! New members are joining KINTO Technologies every day. More new-joiner entries from members assigned to various departments are on the way, so we hope you look forward to them. And KINTO Technologies is looking for people to work with us across a wide range of departments and roles! See here for details.
Hello! I am Tsun-Tsun, a member of the Labor Affairs and General Affairs team in the Human Resources group. We work to improve our office spaces while listening to employees' voices. Today, I will talk about what we improved in 2023. Nihonbashi Muromachi Office Placing Miniature Toyota Cars at the Reception Area KINTO Technologies mainly develops mobility services under the KINTO brand, including a vehicle subscription service. One day, an employee said, "We're a vehicle company, but there aren't many vehicle-related elements around the office." I thought, "In that case, let's put out some miniature cars." 🔻16F reception area These are Toyota models. They make the reception area lively, but sorry, they're not for sale! By the way, can you tell which models they are? Upper row, from left: GR Supra, Sienta, Voxy, GR Yaris, Harrier, Land Cruiser Lower row, from left: Prius, Crown, bZ4X, Corolla Sport, RAV4 🔻7F reception area From the rear left: Harrier, Yaris, Corolla Cross, Passo, and Alphard From the rear left: Corolla, Aqua, GR Yaris, C-HR, Yaris, Roomy Reviewing Work Styles After COVID-19 Was Reclassified as a Class 5 Disease When COVID-19 was reclassified as a Class 5 infectious disease in Japan, the company relaxed some of its restrictions on coming to the office, and we adopted a hybrid work style combining working from home and working at the office. With the restrictions relaxed, more employees started coming back to the offices, and we received requests such as "I want private booths" and "we need bigger meeting rooms." So we added more meeting rooms and created two areas for casual meetings. For the meeting rooms, we furnished two unused rooms with smoked glass and air conditioning and added them to the reservation system. ![](/assets/blog/authors/tsujimoto/6.jpg =500x) ![](/assets/blog/authors/tsujimoto/7.jpg =500x) These two rooms can now be used anytime. We have also introduced two types of informal meeting spaces. The first is private booths. 
We use KOKUYO's Fore series. ![](/assets/blog/authors/tsujimoto/8.png =500x) ![](/assets/blog/authors/tsujimoto/9.png =500x) The booth area has windows, so to keep the booths from getting too hot in summer, we chose a type that is not enclosed on all four sides. The second type of meeting area has desks. We use KOKUYO's Join series. ![](/assets/blog/authors/tsujimoto/10.png =500x) ![](/assets/blog/authors/tsujimoto/11.png =500x) The chairs have rounded bases and move like balance balls. Jimbocho Office Greening Plan The Muromachi Office has plants in resting areas, at the entrance, and elsewhere, but the Jimbocho Office had barely any greenery. Some employees said it felt a bit sad without plants, so the Jimbocho Office started a greening plan to increase the number of houseplants. Below are pictures of the greening plan for the office and conference room. ![](/assets/blog/authors/tsujimoto/12.png =500x) ![](/assets/blog/authors/tsujimoto/13.png =500x) Continuing to Improve Our Office Environments Next year, we will work on further improvements to our office. We plan to renovate the break room as part of our initiative to improve internal communication, and when it is complete, we will announce it here on the Tech Blog too. I walk around the office every day, and I can feel the company growing. The Office Changes with the Times More and more companies seem to be adopting hybrid work styles after the COVID-19 pandemic, and I expect growing demand for offices that support hybrid work and create spaces that make people want to be there. In particular, since our company has employees of various ages and nationalities, we want to create an office environment that accommodates all kinds of people with different values. Lastly, since it is hard to find information about other companies' facilities, I wrote this in the hope that it may serve as a helpful reference for others. 
The KINTO Technologies Advent Calendar is still going on, so I hope you look forward to what’s in store tomorrow!