KINTO Technologies Tech Blog
## Introduction

I am Kanaya, a member of the KINTO FACTORY project, a service that allows you to renovate and upgrade your car. In this article, I will introduce our efforts to improve deployment traceability to multiple environments by utilizing GitHub and JIRA. Last time, I wrote a related article about Remote Mob Programming in the Payments Team.

## Background and Challenges

I joined the KINTO FACTORY project in the latter half of the development process. I was assigned as the frontend team leader for the e-commerce site project, and while in charge, I noticed the following issues:

1. GitHub Issues, JIRA, and Excel were all used for task management, making progress difficult to manage
2. It was difficult to track which tasks were deployed in which environment
3. Generating release notes when deploying to a test environment was troublesome

![Excel WBS and Gantt chart example](/assets/blog/authors/kanaya/traceability_excel_gantt.png =480x)
Excel WBS and Gantt chart example

First, managing progress was difficult. When I joined the project, three kinds of task management tools were in use at once: GitHub Issues, JIRA, and Excel WBS and Gantt charts. Because the necessary information was not centralized, schedules and tasks were hard to manage.

Second, it was difficult to track which tasks were deployed in which environment. During development there were two deployment targets (a development environment and a test environment), so it was hard to know which environment a task under development had already been deployed to.

Lastly, generating release notes when deploying to the test environment was troublesome. The test environment was used not only by us engineers but also by the QA team responsible for quality assurance, so we needed to communicate when and what content had been deployed to it. We used to create release notes for this purpose, but writing them each time took about 5 minutes and was quite stressful.
Our goal was to improve deployment traceability to address these issues. We expected at least issues 2 and 3 (environment-specific deployment management and release note generation) to be resolved. In addition, we aimed to resolve issue 1 (difficulty in managing progress) by changing the way we work, as described later.

## Policy to Enhance Deployment Traceability

First of all, traceability is described in DevOps technology: Version Control | DevOps Capabilities as follows:

> No matter which environment is chosen, it is essential to quickly and accurately determine the versions of all dependencies used to create the environment. Additionally, the two versions of the environment should be compared to understand the changes between them.

In short, differences between multiple environments must either be avoided or be quickly identified once they occur. Note that for the frontend, version control of all dependencies is handled by npm's package.json and package-lock.json, so I'll skip that here.

As a policy to improve traceability and manage which tasks are deployed to which environments, we did the following:

- Manage all tasks and deployments with JIRA
- Rely on automatic generation of release notes

### Manage all tasks and deployments with JIRA

JIRA has a feature to view development information for an issue. Since it shows the status of code, reviews, builds, and deployments, we decided to consolidate all development information into JIRA. To integrate JIRA and GitHub, the following steps are required:

1. Set up the JIRA and GitHub integration
2. Include the JIRA ticket number in the branch name to connect the JIRA ticket with the GitHub pull request
3. Set the environment during deployment with GitHub Actions

The second step was left to each engineer. In asking each engineer to include the JIRA ticket number, we decided to eliminate the use of GitHub Issues and Excel and unify on JIRA.
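The branch-name convention in the second step can be made mechanical. Below is a minimal sketch, not our actual tooling, of extracting a JIRA ticket key from a branch name so a CI check could reject branches that JIRA would fail to link (the `FACTORY` project key in the example is a hypothetical placeholder):

```typescript
// Hypothetical check that a branch name embeds a JIRA ticket key
// (e.g. "FACTORY-123"), so JIRA can link the pull request to the ticket.
// The project key shown in the example comment is an illustrative assumption.
const JIRA_KEY_PATTERN = /\b[A-Z][A-Z0-9]+-\d+\b/;

function extractJiraKey(branchName: string): string | null {
  const match = branchName.match(JIRA_KEY_PATTERN);
  return match ? match[0] : null;
}

// Example: extractJiraKey("feature/FACTORY-123-add-cart") -> "FACTORY-123"
// Example: extractJiraKey("chore/update-deps")            -> null
```

A check like this could run as a pre-push hook or a CI step, failing fast when a branch would not be picked up by the JIRA integration.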
By unifying on JIRA, each engineer can manage tasks more easily, and those who manage progress can use JIRA's roadmaps for centralized management.

JIRA roadmap example

For the third step, by passing an environment parameter at deploy time, the deployment status for that environment is also linked to JIRA. For reference, here is part of the GitHub Actions deployment code we are using. The environment parameter receives `${{ inputs.env }}`, so a key is created for each environment. Since `${{ inputs.env }}` contains the environment name of the deployment destination, the deployment destination is integrated with JIRA.

```yaml
DeployToECS:
  needs: [createTagName, ecr-image-check]
  if: ${{ needs.createTagName.outputs.TAG_NAME != '' && needs.ecr-image-check.outputs.output1 != '' }}
  runs-on: ubuntu-latest
  environment: ${{ inputs.env }}-factory-frontend
  steps:
    - Specific processing
```

As a result, the development status is managed with JIRA roadmaps and tickets, and each ticket can be checked to see whether it is under review, merged but not deployed, or deployed, and to which environment.

Status listed on each JIRA ticket

Visualizing the deployment status across all tickets, not just on each ticket, is also possible. It is useful to see when each ticket was deployed and to which environment.

Visualization of deployment status to each environment

:::message
GitHub also has a Projects feature that can achieve this to some extent, but considering the roadmap feature and integration with the tools used by the QA team, we unified on JIRA.
:::

### Rely on automatic generation of release notes

For automatic generation of release notes, we decided to use GitHub's automatically generated release notes feature, which lists the titles and links of pull requests in the release notes portion of GitHub's release feature. It works better with a few rules in place, introduced below.
### Define the categories of release content

The pull requests listed in the release notes are not categorized by default, making them difficult to view. Categorizing the pull requests keeps release notes organized and easy to read. Categories are represented by labels. This time, I wanted to display major changes and bug fixes as categories in the release notes, so I created 'enhancement' and 'bug' labels to represent each.

You can generate a list of pull request titles by category by creating a file .github/release.yml in the target repository and writing the following:

```yaml
changelog:
  categories:
    - title: Major Changes
      labels:
        - 'enhancement'
    - title: Bug Fixes
      labels:
        - 'bug'
    - title: Others
      labels:
        - '*'
```

An image of the generated release notes is shown below. Pull requests labeled 'enhancement' and 'bug' are classified as 'Major Changes' and 'Bug Fixes,' respectively. All pull requests without those labels are classified as 'Others.'

### Category sorting and title correction at pull request review time

It is possible to generate release notes and then sort them manually, but once they are generated it is hard to remember and sort them. Therefore, at pull request review time, we assign the labels that correspond to the categories. We also check that titles appropriately describe the content. To avoid forgetting to apply labels, 'others' labels are given to refactoring and the like. This ensures that we know the review and category sorting are complete.

## Results

Through the above efforts, we were able to successfully resolve the issues we were facing. In particular, the JIRA roadmaps have been referenced by other teams and are now used throughout the KINTO FACTORY project.

Previously, GitHub Issues, JIRA, and Excel were used for task management, making progress difficult to manage. Now, everything is centralized and managed in JIRA tickets and roadmaps.
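Conceptually, the release.yml categorization is a first-match lookup over a pull request's labels, with `'*'` as the catch-all. A minimal sketch of that logic (this models GitHub's behavior for illustration; it is not GitHub's implementation):

```typescript
// Minimal model of how a release.yml changelog config maps a pull
// request's labels to a release-note category: the first category
// whose labels match wins, and '*' acts as the catch-all.
type Category = { title: string; labels: string[] };

const categories: Category[] = [
  { title: "Major Changes", labels: ["enhancement"] },
  { title: "Bug Fixes", labels: ["bug"] },
  { title: "Others", labels: ["*"] },
];

function categorize(prLabels: string[]): string {
  for (const category of categories) {
    if (category.labels.includes("*")) return category.title; // catch-all
    if (prLabels.some((label) => category.labels.includes(label))) {
      return category.title;
    }
  }
  return "Others";
}
```

For example, a pull request labeled only 'bug' lands under "Bug Fixes", while an unlabeled refactoring PR falls through to "Others".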
Previously, it was difficult to track which tasks were deployed in which environment. Now, the deployment status for each environment is visible on tickets.

Previously, creating release notes when deploying to a test environment was troublesome. Now, work that used to take 2-3 minutes has dropped to about 10 seconds.

## Future Development

Once we deploy to the production environment, JIRA can measure two of the DevOps Four Keys in terms of speed:

- Deployment frequency to the production environment
- Lead time from merge to deployment in the production environment

Our team will collaborate to identify the current status and target metrics for deployment frequency and change lead time for continuous improvement.

The KINTO FACTORY project is looking for team members who will work together to achieve service growth. If you are interested in this article or KINTO FACTORY, check out the job listings below!

[KINTO FACTORY Full Stack Engineer] KINTO FACTORY Development Project Team, Tokyo
[KINTO FACTORY Backend Engineer] KINTO FACTORY Development Project Team, Tokyo
[KINTO FACTORY Frontend Engineer] KINTO FACTORY Development Project Team, Tokyo
## Introduction

Hello, I am Keyuno and I am part of the KINTO FACTORY front-end development team. As part of our KINTO FACTORY service, we are launching a dedicated magazine using Strapi, a headless content management system (CMS). *More details will be shared in an upcoming article, so please stay tuned!

:::message
What is Strapi?
- A headless CMS with high front-end scalability
- Low implementation costs thanks to default APIs for content retrieval
- As open-source software (OSS), APIs can be added and expanded as needed
:::

In this article, I explain how to add custom APIs to Strapi, which we implemented when introducing it. The article covers the following two patterns of custom API implementation.

:::message
Custom API implementation patterns and use cases

1. Implementing a new custom API
   - We want to retrieve and return entries from multiple collectionTypes (content definitions)
   - We want to return the results of business logic that cannot be fully covered by the default API
2. Overriding the default API
   - We want to modify entry retrieval by replacing the auto-assigned postId with a custom UID
:::

Optimizing web page management is a constant challenge. I hope this article helps ease the burden for engineers, even if just a bit.

## Development Environment

- Strapi version: Strapi 4
- Node version: v20.11.0

## Implementing a new custom API

This section shows how to implement a new custom API. While this approach offers high flexibility because it can be implemented at the SQL level, overdoing it can make maintenance difficult, so use it wisely.

### 1. Create a router

First, add the routes for the API endpoints you create. Under src/api, there is a directory for each collectionType. In the figure below, the routes directory is under post. Create a file under routes for defining the custom route.

*According to the official documentation, there is a command, npx strapi generate, that prepares the necessary files (though I haven't used it).
In the created file, write the following code:

```ts
export default {
  routes: [
    {
      method: "GET", // The HTTP method. Modify as needed to suit your purposes.
      path: "/posts/customapi/:value", // The endpoint for the API you will implement.
      handler: "post.customapi", // The controller that this route refers to.
    },
  ],
};
```

- method: Specify the HTTP method. Modify it as needed to suit the API you are creating.
- path: Specify the endpoint for the custom API you are implementing. In the sample endpoint, /:value indicates that the trailing segment is received as the value variable. For example, if /posts/customapi/1 and /posts/customapi/2 are called, value will contain 1 and 2 respectively.
- handler: Specify the controller (explained later) that the custom API refers to, using the name of the controller function you want to call.

### 2. Implement the controller

Implement the controller referenced by the routes created in step 1. Open the post.ts file located in the controllers directory, which is at the same level as the routes directory.
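To illustrate how the `:value` segment is captured, here is a minimal sketch of path-parameter matching. Strapi (via its router) does this internally; the sketch only mirrors the behavior described above and is not Strapi's actual implementation:

```typescript
// Minimal illustration of how ':param' segments in a route path
// capture the corresponding segment of a request path.
function matchRoute(
  pattern: string,
  path: string
): Record<string, string> | null {
  const patternParts = pattern.split("/");
  const pathParts = path.split("/");
  if (patternParts.length !== pathParts.length) return null;

  const params: Record<string, string> = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(":")) {
      // ':value' captures whatever appears in this position
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // static segment mismatch: route does not apply
    }
  }
  return params;
}

// matchRoute("/posts/customapi/:value", "/posts/customapi/1") -> { value: "1" }
```

In the real controller, this captured value is what arrives in `ctx.params`.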
In this file, add the handler (customapi) specified in the previous routes to the default controller (CoreController) as follows:

Before change (initial state):

```ts
import { factories } from '@strapi/strapi';

export default factories.createCoreController('api::post.post');
```

After change:

```ts
import { factories } from "@strapi/strapi";

export default factories.createCoreController("api::post.post", ({ strapi }) => ({
  async customapi(ctx) {
    try {
      await this.validateQuery(ctx);
      const entity = await strapi.service("api::post.post").customapi(ctx);
      const sanitizedEntity = await this.sanitizeOutput(entity, ctx);
      return this.transformResponse(sanitizedEntity);
    } catch (err) {
      ctx.body = err;
    }
  },
}));
```

What's changed:

- Added a custom handler customapi() to the default controller
- Retrieved the result of executing the customapi() service, which contains the business logic

:::message
In this section, the business logic is moved to the service layer, but it is also possible to implement the business logic in the controller (choose the layer based on reusability and readability).
:::

For details on validateQuery(), sanitizeOutput(), and transformResponse(), please refer to Strapi's official documentation.

### 3. Implement the service

Implement the service referenced by the controller implemented in step 2. Open the post.ts in the services directory, which is at the same level as the controllers directory. Add the method (customapi) specified in the previous controller to the default service (CoreService) as shown below.
Before change (initial state):

```ts
import { factories } from '@strapi/strapi';

export default factories.createCoreService('api::post.post');
```

After change:

```ts
import { factories } from "@strapi/strapi";

export default factories.createCoreService("api::post.post", ({ strapi }) => ({
  async customapi(ctx) {
    try {
      const queryParameter: { storeCode: string[]; userName: string } = ctx.query;
      const { parameterValue } = ctx.params;

      const sql = "/** Database to use, SQL according to purpose */";
      const [allEntries] = await strapi.db.connection.raw(sql);
      return allEntries;
    } catch (err) {
      return err;
    }
  },
}));
```

What's changed:

- Added the custom service customapi() to the default service
- Retrieved the query parameter information (queryParameter)
- Retrieved the endpoint parameter information (parameterValue)
- Retrieved the SQL execution results

:::message
You can use strapi.db.connection.raw(sql) to execute SQL directly, but Strapi also provides other ways to obtain data. For other methods, please refer to the official documentation.
:::

### 4. Confirm operation

With this, the implementation of the new custom API is complete. Try actually calling the API and check that it works as expected.

## Overriding the default API

In this section, I show an example of overriding the default entry detail retrieval API to allow fetching with a custom parameter.

[Entry detail retrieval API]
- Before override: GET /{collectionType}/:postId(number)
- After override: GET /{collectionType}/:contentId(string)

### 1. Create a router

This is basically the same as when implementing a new custom API. Add the following code to custom.ts under the routes directory:

```ts
export default {
  routes: [
    {
      method: "GET",
      path: "/posts/:contentId",
      handler: "post.findOne",
    },
  ],
};
```

With this route addition, the endpoint that previously retrieved entry details using /posts/:postId(number) now retrieves them using /posts/:contentId(string) (entry details can no longer be retrieved using /posts/:postId(number)).

### 2. Implement the controller

The implementation of the controller is basically the same as when implementing a new custom API. Modify the post.ts in the controllers directory, which is at the same level as the routes directory, as follows:

Before change (initial state):

```ts
import { factories } from '@strapi/strapi';

export default factories.createCoreController('api::post.post');
```

After change:

```ts
import { factories } from "@strapi/strapi";
import getPopulateQueryValue from "../../utils/getPopulateQueryValue";

export default factories.createCoreController("api::post.post", ({ strapi }) => ({
  async findOne(ctx) {
    await this.validateQuery(ctx);
    const { contentId } = ctx.params;
    const { populate } = ctx.query;
    const entity = await strapi.query("api::post.post").findOne({
      where: { contentID: contentId },
      ...(populate && {
        populate: getPopulateQueryValue(populate),
      }),
    });
    const sanitizedEntity = await this.sanitizeOutput(entity, ctx);
    return this.transformResponse(sanitizedEntity);
  },
}));
```

What's changed:

- Added a custom findOne() controller to the default controller
- It extracts the record whose contentID column matches contentId; since .findOne() is used, the result is a single object

:::message
The spread over populate follows the process for applying the populate parameter provided by the default API. If you want to fetch videos or images from the media library, you must add populate, so please be aware.
:::

In this section, the business logic is implemented in the controller rather than the service.

### 3. Confirm operation

With this, the implementation to override the default API is complete. Try actually calling the API and check that it works as expected.

## Conclusion

This concludes the explanation of implementing custom APIs in Strapi. I think Strapi is a highly customizable and great tool. I hope to continue sharing my knowledge, and I would be happy if you could share your insights as well.
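As a supplementary note, the getPopulateQueryValue utility imported in the controller above is not shown in the article. Purely as a hypothetical sketch of what such a helper might do, it could normalize the populate query value into a shape the query engine accepts:

```typescript
// HYPOTHETICAL sketch of a getPopulateQueryValue-style helper.
// The real implementation is not shown in the article; this version
// simply normalizes a populate query value: "*" populates everything,
// a comma-separated string becomes a field list, and arrays pass through.
function getPopulateQueryValue(
  populate: string | string[]
): boolean | string[] {
  if (populate === "*") return true; // populate all first-level relations
  if (Array.isArray(populate)) return populate;
  return populate.split(",").map((field) => field.trim());
}
```

Treat this only as an illustration of why such a normalization step exists between the raw query string and the query engine.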
We also have other topics, such as:

- Automatically building applications when publishing Strapi articles
- Embedding videos (e.g., mp4) in CKEditor

I will cover these topics in future articles. Thank you for reading.
## Overview

Hello, we're Mori, Maya S, and Flo from the Operations Enhancement Team at the Global Development Group. The Global Development Group organized an in-house hackathon-style event called "KINTO Global Innovation Days," which took place over six days from December 14th to 21st. During the first four days, from December 14th to 19th, three seminars were held, followed by two days dedicated to actual development. This was the first time such an event was held within KINTO Technologies. This article is the first in a series on the event, sharing the journey leading up to it.

## How it started

KINTO Technologies currently consists of about 300 members and has roughly doubled in size in about two years. Within it, the Global Development Group is also a large group of 60 members. As an organization, we are subdivided into teams of 5 to 10 members, each performing their own tasks, but communication across teams has always been a challenge. Even within the Global Development Group, it's common for people to struggle with matching faces to names. In addition, although we had planned and organized internal study sessions to improve communication and skills, they inevitably turned into one-way knowledge sharing. We were looking for an opportunity for engineers to learn through hands-on activities. In July, several of our group members participated in a hackathon at Toyota Motor North America (TMNA), which made us think that hosting such an event within our group could address the above issues. So we started planning and proposing this event at the end of August.

## Objectives and Timing

While hackathon events have a variety of benefits in general, our primary objective this time was to stimulate cross-team communication. We believe that by not leaning too much on the business side, we gained a certain degree of freedom in thinking. We also set a goal of holding the event by the end of 2022 at the latest.
The reason was that a major project involving the entire group was set to be completed by November, making it difficult to anticipate tasks beyond the fourth quarter.

## Research and Content Review

Since this was our first time organizing an event, we began by researching hackathon cases around the world to consider what the event should look like. Maya S was in charge of this research. As we studied various role models, mainly from other companies' tech blogs and hackathon event sites, a pattern began to emerge. By picking up the elements of that pattern and combining them with aspects that fit our organization and goals, we were able to put together the contents for our Innovation Days. Many findings could be presented, but I will explain three of them below.

### Finding 1: Benefits

As we prepared for the event, we felt the need to communicate the benefits of participating to the participants, stakeholders, and everyone involved. For example, the benefits to the organization include opportunities for gaining ideas for intellectual property, increasing member engagement, and discovering new strengths. As for the benefits to individuals, we emphasized that they could learn in various ways by coming up with ideas they cannot tackle in their daily work and by interacting with work processes and members they would not usually encounter.

### Finding 2: Content ideas

Based on the above benefits, the seminars were incorporated as content. We learned that hackathons typically include talks by guest speakers, lectures, and workshops aligned with the event's theme and goals. For Innovation Days, we prepared a workshop on upstream processes, which participants do not usually experience, a communication workshop, and a workshop on the Toyota Way, fitting for a "hackathon held by KINTO Technologies." Many people think of novelty items when it comes to events hosted by IT companies.
This time, we distributed stickers, hoodies, and clear files to the participants and support members. We also borrowed ideas from various events, like setting up criteria and rules for judging final pitches and deliverables, allocating time for coding, icebreakers, and prizes.

Note: After the event name was decided, the UIUX team in the Global Group designed the logo. Thanks to them, we ended up with fantastic novelty items. Appreciate it a lot!!!!

### Finding 3: Theme setting

The last point we want to address is theme setting. Noting that many hackathons have narrowly focused themes and objectives set by organizers, and some even have sponsors for various themes, in our event the managers decided on "Challenge Themes" and took on the role of "Challenge Owner" to sponsor and explain each theme to the participants. This approach allowed the managers to provide support and encouragement to the participants.

Reference:
- Council Post: Four Tips For Running A Successful Hackathon
- Urban Mobility Hackathon
- Find & Organize Hackathons Worldwide - Mobile, Web & IoT
- Hackathon Guide

## Theme Review

For the content of the themes, the four managers (the Group Manager and three Assistant Managers) who would actually evaluate on the day of the event selected four themes.

Theme 1-2
Theme 3-4

## Encouraging members

Since this was the first attempt within the company, it took about three months from the start of planning to recruiting members, through research, content review, and theme selection. At the beginning of November, after finalizing the themes, we held a project briefing for all Global Development Group members and began recruiting on November 8th. The official event name, "KINTO Global Innovation Days," was decided. There was a proposal to make participation mandatory, but we chose to respect autonomy and let volunteers opt in instead. Slack was used for recruiting.
At the briefing, we received words of encouragement from our managers and told our participants that we had the support of our CEO and CIO. However, recruiting participants was initially challenging, so we focused on highlighting the benefits directly to the team members. Flo was responsible for this. We decided to communicate the benefits in person in the office and through DMs. This also let us ask members who were unable to participate why, and make improvements.

First, we explained the experience and skills they would gain by participating. We emphasized the opportunities to try programming languages they don't normally use, propose new tools, and suggest improvements that hadn't been prioritized. We also appealed to a sense of ownership and investment, as proposals made during the event could be used to improve processes in the Global Group (Theme 3), be commercialized as a new service (Themes 1 and 2), or be considered for other hackathon events. Above all, our top priority was creating a supportive environment. Although ideas are evaluated and rewarded, the competition is friendly. We also encouraged people who had never participated in such an event, felt they couldn't contribute because they weren't engineers, or thought they'd be of no use, to join, because it's an event where they could experience things they normally would not.

There were also things we noticed in conversations. Since the event was held before Christmas, several people were planning to take consecutive holidays or return to their home countries. For this reason, we decided to move the event up a few days. We adjusted the schedule with the instructors of each workshop, and finally set the pre-event for December 14th to 19th, with Innovation Days on December 20th and 21st. This gained us at least two to three more team members who could participate.
As a side note, since there were only three operating members plus one support member, it was convenient for us to have a weekend in the middle of the event. Hosting the event all week long would have been physically demanding.

## Grouping and Pre-work

Thanks to the recruiting efforts, we gathered 30 participants. This was more than half of the Global Development Group, given that the group manager, assistant managers, and we operational members were not eligible to participate. Participants came from various teams such as Business Development, PdM (Product Management), UIUX, Frontend, Backend, Testing, and DevOps. We allocated members based on two conditions: 1) involving people who are not usually involved in each other's work, and 2) separating team leaders to maintain a balance of power. We ended up with 5 people in each of the 6 teams. (The members were perfectly divided because we had a total of 30 participants 😊)

The team members were announced on November 18th, and teams were then given two weeks to review and submit the following information:

- Team name
- Theme of choice
- Team leader

As the team most used to interacting cross-functionally in the group, we had concerns about whether the participants would be able to communicate well with each other or engage actively in the event. However, our worries were unnecessary. As they were participating voluntarily, each team was more proactive than expected, creating their own Slack channels and holding meetings, which gave us hope for future events! 🎉

## Review of Preparation

Since we started this project with no experience at all, either within the company or from previous jobs, we had to conduct extensive research and seek advice from various people during the preparation. In particular, the approval process took a long time, but involving the CIO and the president was one of our achievements, and a major factor that we believe will lead to future events.
In addition, we were able to distribute tasks successfully by combining the strengths of each Operations Enhancement Team member, such as idea generation (including research), planning and reporting, and understanding the situation and inspiring team members. This enabled us to carry out the project in a short period of about four months from conception. There were various challenges during the pre-event period and on the day of the event, which will be described in the next article.

## Conclusion

By the way, the planning of KUDOS and this event emerged from our daily conversations within the Operations Enhancement Team. We place a high value on conversations and take pride in our ability to go from casual conversation, like suggesting solutions and sharing experiences, to planning, execution, and results.
## Introduction

Hello. I am Nakaguchi from the KINTO Technologies Mobile App Development Group. As the leader of the iOS team, I have previously published articles on team building, which you may find interesting. Please feel free to check them out:

- Revitalizing Retrospectives Through Professional Facilitators
- 180-degree feedback: Highly recommended!

Recently, I participated in [Probably the world's fastest event: the "Agile Teams goal-setting Guidebook" ABD Reading Session]. My three main objectives for attending were:

1. I wanted to experience Active Book Dialogue® (referred to as "ABD" from here on).
2. I was interested in the book featured in the event, the "Agile Teams goal-setting Guidebook".
3. I wanted to meet the author, Ikuo Odanaka.

Among these, experiencing ABD for the first time was particularly valuable. I found this reading method incredibly insightful and would like to introduce ABD to more people through this article.

## Important Notice

All individuals and materials mentioned in this article have been approved for publication by the event's organizers and the respective individuals.

## About the event

This event took place on Wednesday, July 10, 2024, and was held as an ABD reading session with the author, ahead of the publication of the "Agile Teams goal-setting Guidebook". The event was so popular that the 15 available slots were filled on the same day the registration page opened. I feel incredibly fortunate to have been able to participate. I'm especially grateful to Kin-chan from our Corporate IT Group, who introduced me to this event!

## About the book

I won't go into too much detail about the book's content, as I encourage you to read it yourself. However, I'd like to share some insights Ikuo-san introduced during the opening. It seems that goal setting isn't particularly favored in today's society. However, if everyone sincerely engages with their goals and strives to achieve them, the world will become a better place.
Therefore, creating good goals is extremely important. That said, while setting goals is crucial, finding ways to achieve them is even more important. This book dedicates roughly the first 20% to the process of goal setting, with the remainder focused on how to achieve those goals, incorporating elements of Agile methodology. Although the book doesn't cover performance evaluations, which are often discussed alongside goal setting, it does include columns written by eight contributors. These columns nicely complement the content, so I highly recommend reading them!

Ikuo-san's opening scene

## About Ikuo-san

Although I had never met Ikuo-san before, I was familiar with him through the following LT sessions and articles:

- "Keeper of the ~~seven keys~~ four keys and three more"
- "10 reasons why it's easy to work with an engineering manager like this!"
- "To fulfill the pride of being a 'manager'"
- "5 essential books that supported the Ideal EM, Ikuo Odanaka"

I found his insights on development productivity, engineering management, and his approach to reading to be incredibly valuable. I've always wanted to meet him and have a conversation. Unfortunately, although I managed to exchange a brief greeting with him during the event, I didn't have the chance to have a proper conversation. While this was disappointing, I hope there will be another opportunity in the future.

## About ABD

The following is a quote from the official ABD website.

> What is ABD? Explanation by the developer, Sotaro Takenouchi:
> ABD is an entirely new reading method that allows both people who are not fond of reading and those who love books to read the books they want in a short period of time. Through the process of dividing the book, summarizing it, presenting and sharing the summaries, and engaging in discussions, participants can deeply understand what the author is trying to convey, leading to active insights and learning.
Additionally, by combining the active reading experience of each participant through group reading and discussion, the learning deepens further, and there is potential for new relationships to be fostered. I sincerely hope that through ABD, everyone can take better steps in their reading, driven by their intrinsic motivation. The process Co-summarize Participants bring their own books or divide one book into sections. Each person reads their assigned section and creates a summary. Relay presentation Each participant presents their summary in a relay format. Dialogue Participants pose questions and discuss their impressions and thoughts, deepening their understanding. The appeal of ABD Short reading time ABD allows you to read a book in a short amount of time while gaining a deep understanding of the author's intentions and content. It’s perfect for those who tend to accumulate unread books. Summaries remain After an Active Book Dialogue® session, the summaries remain, making it easy to review and share the key points with others who haven’t read the book. High retention rate Since participants are mindful of presenting when they input and summarize information, followed by immediate output and discussion, the content sticks in memory more effectively. Deep insights and emergence Engaging in dialogue with diverse people, each bringing their own questions and impressions, leads to profound learning and the emergence of new ideas. Multifaceted personal growth ABD helps participants develop focus, summarization, presentation, communication, and dialogue skills, which are all crucial for leadership in today’s world. Creation of a common language When the same team members participate, they share the same level of knowledge, creating a common language. Community building With just one book, you can create a space for dialogue and connect with others, making it ideal for casual community building. Most importantly, it’s fun! 
The immediate sharing of the excitement and learning gained from reading enriches the experience and, most importantly, makes it enjoyable. Personally, I find the value in 1. Short reading time, 6. Creation of a common language, 7. Community building, and 8. Most importantly, it’s fun! to be exceptionally high. On the day The book was divided into 15 sections. This was the first time I had seen such a sight! lol The book was divided into sections Co-summarize (20 minutes) Each participant read their part and created a summary. We were given 20 minutes to read and summarize the book onto three A4 sheets, which was quite challenging. I was so pressed for time that I forgot to take any pictures. Relay Presentation (1 minute 30 seconds per person x 15 people) Each participant posted their summaries on the wall. The summaries everyone prepared Then, each person presented their summary in 1 minute and 30 seconds. Everyone’s summaries and presentations were outstanding. Here is a photo of me presenting. I was so nervous, and the time was so short, that I can’t remember what I said at all! My presentation Dialogue (25 minutes) In this part, we picked three sections from the presentations and divided into groups to discuss them further. I joined the group focused on "Becoming a team that can help each other." Group discussion Within the group were Scrum Masters and Engineering Managers, and we exchanged various opinions. One particularly memorable discussion was about how we should build teams where people can challenge themselves with what they love, whether it’s their forte (specialty) or something they struggle with (growth opportunity). What I learned from the book through ABD Up until now, I had never used "OKR" (Objectives and Key Results) as a method for goal management, but my understanding of OKR has deepened through this experience. I also learned how crucial it is for a team to set goals driven by intrinsic motivation when creating goals. 
What stood out to me was the importance of setting goals through discussions within the team, rather than using a top-down approach. Additionally, I was struck by the idea that what truly matters is the “achievement of goals,” not just the “completion of tasks.” The notion that “sometimes, you need the courage to abandon lower-priority tasks” was a new perspective for me. Moreover, the breakdown of reasons why we might feel we don’t have enough time to achieve our goals, such as genuinely not having enough time, being unsure if the time investment is worthwhile, or lacking motivation, was something I had never considered before. While the idea of “genuinely not having enough time” is easy to grasp, the concepts of “not being sure if it’s worth the time" and "lacking motivation" were new to me, though they resonated with my own experience. The book also offers solutions to these challenges, so I would like to read it again and review them. Thoughts It was my first time experiencing ABD, and I found it both stimulating and very enjoyable. Since all the participants on the day were genuinely interested in the book we discussed, the presentations and dialogues were highly constructive, and I learned a lot. I’m considering trying ABD at our company as well, by gathering team members who are interested. However, I also felt that the operational difficulty could be quite high for the following reasons: Facilitators need strong skills because the session must proceed within a limited time. Co-summarizing is challenging, which might lead to differences in the quality of summaries and presentations depending on the participants. Selecting the right book and gathering team members could be difficult. I’ve participated in book study groups several times before, but I found that they often pose challenges like the burden of continuity over a long period and the individual workload (depending on the format of the book study group). 
In contrast, ABD offers a great alternative by wrapping up the session in a short time, which helps to overcome those drawbacks. However, the trade-off might be a lower understanding of the book due to the shorter session time. I think it’s important to carefully select the book and have prior discussions with participants to determine the most suitable reading method.
Introduction Hello, this is ahomu; I joined in June. For this article, I asked everyone who joined KINTO Technologies in June and July 2024 to write about their impressions after joining. I hope it becomes worthwhile content both for everyone interested in KINTO Technologies and for the contributors themselves when they look back on it someday! hosoya ![Photo of a houseplant](/assets/blog/authors/ahomu/20241007/hosoya.jpg =300x) Self-introduction: I'm hosoya, in the IT/IS Department, where I staff the help desk for our internal IT. What's your team structure? Five members including me. Besides my team, there are several other teams divided by role, and we coordinate with them depending on the inquiry. First impression of KTC? Any gaps? I was impressed that the internal IT function is split into teams by role and that they coordinate so well. Having only worked in one- or two-person IT departments before, it struck me as very well organized. What's the atmosphere like? Quiet enough to focus on your own work, yet never hard to approach anyone; whether it's work or small talk, conversation picks up right away, so the mood is bright. How did you feel about writing for the blog? Without overlapping work, there are few chances to learn what others do day to day, so I hope this blog becomes such an opportunity. Question from a colleague: What does a typical workday look like? Answer: I arrive at 9:00 and handle help-desk inquiries until leaving at 18:00, with team information-sharing meetings in the morning and evening. It varies with the inquiries, but it's mostly routine work each day. my ![Photo of a blue sea and sky with white clouds](/assets/blog/authors/ahomu/20241007/my.jpg =300x) Self-introduction: I'm my, from the Data Analysis Department, currently working as a data scientist. I've been involved in all kinds of data work as a data scientist and machine learning engineer. What's your team structure? Four members including the manager. First impression of KTC? Any gaps? Pleasant surprises: onboarding is well organized, internal documentation is solid, and communication on Slack is lively. What's the atmosphere like? Calm, and easy to have technical discussions in. How did you feel about writing for the blog? I'm glad to have an opportunity to share information. Question from a colleague: What purchase turned out to be great for working from home? Answer: A Herman Miller chair. It stays comfortable even over long hours, and I'm very satisfied. yi ![Photo of two cacti growing from a flowerpot](/assets/blog/authors/ahomu/20241007/yi.jpg =300x) Self-introduction: I'm yi, in the QA Group of the Platform Development Department, doing QA. What's your team structure? Ten members, currently split into three groups: frontend, back office, and apps, each handling its own projects. First impression of KTC? Any gaps? My impression was that, young as the company is, the internal systems are solid. Before joining I half expected things to be more chaotic, but it was calmer than I imagined. What's the atmosphere like? Even when everyone is busy, people on your team or project will answer questions, and the generally calm mood makes it an easy place to settle into. How did you feel about writing for the blog? Honestly, never having written a blog post like this, I was unsure what to write. Question from a colleague: How is the team's atmosphere? 
Is there anything good about the team that you've noticed recently? Answer: As I wrote above, the overall mood is calm. As QA at KTC, each of us runs testing for our assigned projects together with partner companies. Many members handle several projects and everyone is busy, but I like that people, not only newcomers, freely ask one another questions. ahomu ![Illustration of a seabird holding an axe](/assets/blog/authors/ahomu/ahomu.png =300x) Self-introduction: I'm ahomu, in the IT/IS Department. Career-wise I'm mostly a web frontend developer, but these days I work on various cross-organizational matters. What's your team structure? Actually, I joined on the understanding that we'd sort out the details after I arrived, so as of this writing I'm operating solo, attached directly to the department (an in-house freelancer, if you like) (。•̀ᴗ-)✧ First impression of KTC? Any gaps? During the casual interviews and selection process, my current department head and the vice president spoke frankly about the state of the business and the organization's atmosphere, so nothing felt like a gap. If anything, being under a large corporate parent means internal controls are, in a good way, tighter than at the mega-ventures and startups I've known, which is refreshing. What's the atmosphere like? Solo act though I am, I get to talk with managers and members across many departments. I can feel the responsibility each of them carries for the business, and I'm grateful that they happily chat with a newcomer who approaches them out of the blue. How did you feel about writing for the blog? Oh, this genuinely surprised me: tech blog contributions are remarkably active here, with steady output even though nobody is chasing anyone down, and I sense real potential in that. Question from a colleague: Any differences in culture or atmosphere between the Nagoya and Tokyo offices? Answer: Nagoya is compact, around 20 people, and many members play broad roles, so it feels rather distinctive. It's also relatively close to the KINTO business, and quite a few people deal with the parent company. Lately, informal drinking parties have started happening at the Nagoya office 🍻 Tsuzura ![Evening photo of a river and the townscape on both banks in an overseas city](/assets/blog/authors/ahomu/20241007/tsuzura.jpg =300x) Self-introduction: I'm a designer in the Content Management Group of the Marketing Planning Department! What's your team structure? Nine directors and four designers. First impression of KTC? Any gaps? With departments and teams so finely divided, my first impression was that employees might not mix much, but in practice I go to lunch and private get-togethers with designers from other departments, and the information sharing really helps. What's the atmosphere like? In our team everyone moves with their own project, so we aren't all deeply involved with one another, but we chat when we run into each other while still getting the work done; there's a healthy on/off rhythm. How did you feel about writing for the blog? Nervous and excited. Question from a colleague: A good lunch spot near your office? Answer: I'm based at the Muromachi office, and I recommend 「でですけ サイゴンキッチン」 (Dedesuke Saigon Kitchen)! I always order the half-and-half of pho and curry; each comes in about four flavor variations, and every one is delicious. Naoki Uehara ![Photo of a cat in profile with its eyes closed](/assets/blog/authors/ahomu/20241007/uehara.png =300x) Self-introduction: I'm Uehara, in the KINTO FACTORY Development Group of the Project Promotion Department, working as a backend engineer. At my previous job I built news media at a long-established ISP. My favorite programming language is Rust, and my favorite editor is NeoVim. What's your team structure? Six of us develop on the backend; including frontend engineers, it's around 20 people. First impression of KTC? Any gaps? 
I half expected to be thrown straight into the field with little onboarding, but the onboarding and 1-on-1s turned out to be substantial, which let me settle into the work smoothly. There's an appetite for trying new things in every corner of the company, and it's been a great stimulus for me too. What's the atmosphere like? Friendly and relaxed, I'd say. I'm the type who can't let go of anything I don't understand, and members answer my questions without the slightest annoyance, which I'm very grateful for. I now spend more time focused on development than before, and it's a good environment for facing the product as an engineer. How did you feel about writing for the blog? Before joining, a certain article on the KINTO Technologies Tech Blog actually got me out of a jam, so it's an honor to now be on the writing side. I make a point of producing visible output on Slack, blogs, and the like, and I hope to keep posting useful information on the Tech Blog. Question from a colleague: The best trip you've ever taken, and why? Answer: Ise-Shima, where I went on my honeymoon! The "Mawaryanse" pass sold by Meitetsu is almost too convenient. It's hard to get hold of if you live in Tokyo, but I recommend buying the version without limited-express tickets through Jalan. Liang Jin-Rong ![Photo of curry, french fries, and a can of Sui gin soda](/assets/blog/authors/ahomu/20241007/jin.jpg =300x) Self-introduction: I'm Liang Jin-Rong from Taiwan, in the Mobile App Development Group, mainly developing Android apps. What's your team structure? The development team for my product has six Android engineers, including me. First impression of KTC? Any gaps? My team brims with energy, there are plenty of Android engineers, and the broad technical exchange through study sessions and the like was a great stimulus for me. What's the atmosphere like? It gets busy depending on the development phase; it's quite a fast-paced team. Even so, everyone wants to build a good product, so we never skimp on fine-grained communication. How did you feel about writing for the blog? This was my first joining-the-company post; it let me look back on how I felt as a new joiner and think about what I want to do at KTC from here. Question from a colleague: Any smartphone apps you've been curious about lately? Answer: The PayPay app. I've used it for years since the service launched, and as features keep being added, I'm fascinated by how they maintain app quality while developing, and by how it works as a super app. Dara Lim ![Photo of a car displayed indoors](/assets/blog/authors/ahomu/20241007/daralim.jpg =300x) Toyota FJ25 Land Cruiser - Toyota Dealership in Bogota, Colombia Self-introduction: My name is Dara Lim. I belong to the KINTO Global Development Group in the Business Development Department. My title is Business Development Manager, but the work I do is close to that of a business analyst. In my previous job, I worked as a financial analyst and business analyst in the insurance industry. What's your team structure? There are 3 members on my team, and we work closely with the engineering team to develop software solutions for the global full-service lease businesses. First impression of KTC? Any gaps? I really appreciate the orientation/onboarding process and the 1-on-1 meetings. 
They helped me transition smoothly into the work, and my team was also very supportive. What's the atmosphere like? I really enjoy the Jimbocho office space and its surroundings. My team sits close together, so we can have discussions readily. How did you feel about writing for the blog? Actually, before I joined the company, I was helped by many articles on KINTO Technologies' Tech Blog, so I'm glad to write about my initial experience of joining. Question from a colleague: What is the best thing you have noticed since joining KTC? Answer: I have had the experience of traveling to Latin America to visit KINTO businesses in Peru, Brazil, and Colombia. These were very valuable experiences for understanding the car leasing business and its profitability, and best of all, for meeting other fellow KINTO members. I think this is the best thing I've experienced since joining KTC. Fumiya Tani ![Illustration of a plump cat](/assets/blog/authors/ahomu/20241007/tani.jpg =300x) Self-introduction: I'm Tani, in the New-Car Subscription Development Group of the KINTO ONE Development Department, based at Osaka Tech Lab and working as a frontend engineer. I've handled a wide range of frontend work, from production jobs to service development. What's your team structure? A four-person team; a small crew developing tools for dealerships and internal use. First impression of KTC? Any gaps? Before joining, I had imagined a chaotic mix of big-company and startup atmospheres with a work environment still rough around the edges. In reality, the onboarding was thorough, the workload is easy to adjust, full flextime makes working flexible, overtime is properly paid, the benefits are generous, and there are many kind, helpful people; it was nothing but good surprises. What's the atmosphere like? A psychologically safe environment where you can actively ask about anything you don't understand. Participation in study sessions is encouraged, which is appealing, and in my team there's also a lot of freedom in technology selection, with re-architecting and refactoring encouraged, so on the whole it feels like an easy place to grow your skills. How did you feel about writing for the blog? I resolved to hammer the keyboard for all I'm worth, in the hope of conveying KINTO Technologies in sharper resolution, even a little. Question from a colleague: What's a favorite possession of yours, and why? Answer: My Sony noise-canceling headphones (WH-1000XM5)! Thanks to them, even someone as sensitive to sound as I am can drop into the zone right away, so I treasure them. In closing Thank you all for sharing your impressions of life after joining! New members are joining KINTO Technologies every day! More joining entries from people across many departments are on the way, so stay tuned. KINTO Technologies is looking for colleagues across a wide range of departments and roles! For details, please see our recruiting information https://www.kinto-technologies.com/recruit/
Introduction Hello, this is Ueyama; I joined in April. This article collects impressions from everyone who joined in April 2024. I hope it becomes worthwhile content both for everyone interested in KINTO Technologies and for the contributors themselves when they look back on it someday 🌸 Matsuno ![Golf](/assets/blog/authors/K.ueyama/Newcomers/golf.jpg =250x) Self-introduction: Nice to meet you all! I'm Matsuno, and I joined in April 2024! I belong to the MSP team in the Platform Group of the Platform Development Department. At my previous job I handled maintenance and operation of systems built on AWS. What's your team structure? The MSP team has four members. We mainly handle routine work taken over from other teams. First impression of KTC? Any gaps? My impression was that there are a lot of clearly capable people, my fellow joiners included. Also, how many easygoing, friendly people there are was a gap in a good way. What's the atmosphere like? It's basically easy to ask questions or seek advice at any time. People work in focused silence and chat cheerfully when relaxing; there's a clear on/off rhythm. How did you feel about writing for the blog? I already knew about the tech blog and was interested in it, so this felt like the perfect opportunity! m ![Sea](/assets/blog/authors/K.ueyama/Newcomers/sea.jpg =250x) Self-introduction: I'm m, in the Creative Office. At my previous job I was a UI/UX designer at an SES-style IT company. What's your team structure? Ten members across directors and designers. First impression of KTC? Any gaps? The office is very clean, and with a free drink server it's quite comfortable. What's the atmosphere like? Mostly people in their 30s and 40s, all rich in knowledge and experience. The office is often fairly lively. How did you feel about writing for the blog? It's great to have a place to share your own thinking and knowledge! Rasel ![Castle](/assets/blog/authors/K.ueyama/Newcomers/castle.png =250x) Self-introduction: I'm Rasel from Bangladesh, and I joined in April 2024. I handle iOS on the Prism team in the Mobile App Development Group of the Platform Development Department. What's your team structure? About 14 members, including engineers, designers, and a PO. First impression of KTC? Any gaps? I'm interested in mobility services, and I was deeply impressed by KTC's mission of leading Toyota's mobility services. I haven't felt any particular gap. What's the atmosphere like? People are kind and helpful. There are no barriers to using the latest technology, and technical problems are easy to talk about. How did you feel about writing for the blog? It's my first time blogging in this context, but I think it's a really cool and fun idea. Ueyama ![Pasta](/assets/blog/authors/K.ueyama/Newcomers/pasta.jpg =250x) Self-introduction: I'm Ueyama from the Business Systems Group. At my previous job I did system development at an SIer. What's your team structure? Seven engineers. First impression of KTC? Any gaps? I had already spoken with members of my team during interviews, so I haven't felt much of a gap. What's the atmosphere like? Everyone is genuinely kind and easy to talk to. How did you feel about writing for the blog? I was surprised that self-introduction articles are managed on GitHub and submitted as pull requests. R ![Cat and fish](/assets/blog/authors/K.ueyama/Newcomers/catfish.jpg =250x) Self-introduction: I'm R, on the Membership Platform in the Common Services Development Group of the Platform Development Department. My development work is roughly 60% frontend and 40% backend. What's your team structure? One PdM and four engineers. First impression of KTC? Any gaps? 
Seeing up close so many capable people juggling multiple projects and events inside and outside the company, my first impression was one of great freedom. I had attended a KTC-hosted study session before joining, so I already knew a little of KTC's atmosphere, mostly among its younger members, and I haven't felt any particular gap. What's the atmosphere like? The backend folks work quietly, while on the frontend side things sometimes liven up with opinions and impressions about the screens being implemented. How did you feel about writing for the blog? Reading posts never made me brace myself, but when it came to writing one, I was stuck on what to convey. I clearly need to train my ability to verbalize and communicate. kasai ![Illustration of a chick](/assets/blog/authors/K.ueyama/Newcomers/chickicon.png =250x) Self-introduction: I'm kasai, on the SRE team of the Platform Group in the Platform Development Department. I did SRE at my previous job as well. What's your team structure? The group as a whole is large, but the SRE team is two people! A blog post about the team will be published later, so look forward to that! First impression of KTC? Any gaps? We talked everything through in interviews and meetings and aligned our expectations, so I felt no gap at all! What's the atmosphere like? Friendly and lively!!!!! How did you feel about writing for the blog? At last... the moment... has come!!!!!! https://blog.kinto-technologies.com/posts/2022-12-03-ktc_club_introduction/ In closing Thank you all for sharing your impressions of life after joining! New members are joining KINTO Technologies every day! More joining entries from people across many departments are on the way, so stay tuned 🍻 KINTO Technologies is looking for colleagues across a wide range of departments and roles! For details, please see our recruiting information https://www.kinto-technologies.com/recruit/
Introduction Hello, I'm Hiroya (@___TRAsh) from mobile development. We have a number of in-house products, and most of them have adopted Xcode Cloud. Xcode Cloud is Apple's official CI/CD service, automating iOS app builds and CD (deployment to TestFlight). There was little reference material on pulling in a private repository as a library on Xcode Cloud, and getting the build to pass took some effort, so I'm writing up what I learned here. Audience Since this is about CI/CD for iOS, it assumes some familiarity with iOS development. Environment - Xcode 15.4 - Libraries are managed with SwiftPM - Some referenced libraries live in private repositories - TestFlight deployment runs on GitHub Actions + Fastlane What we want to do We want to migrate TestFlight deployment from GitHub Actions + Fastlane to Xcode Cloud. Doing so removes the dependency on Fastlane and trims the tools needed in the App submission flow. Managing the certificates needed for submission also gets easier, because Xcode Cloud references Apple Developer certificates directly. The snag For all these advantages, Xcode Cloud requires user authentication when referencing a private repository as a library, and it provides no built-in setting for that, so a little extra work is needed. The trick is to use the ci_scripts/ci_post_clone.sh hook that Xcode Cloud provides to configure authentication, which makes private repositories resolvable. Setting up .netrc Since version 12.5, Xcode has been able to read .netrc. A .netrc file lists usernames and passwords; placed at ~/.netrc, it supplies credentials automatically during git clone. Because we distribute the library through GitHub Releases on the private repository, api.github.com is added as well. touch ~/.netrc echo "machine github.com login $GITHUB_USER password $GITHUB_ACCESS_TOKEN" >> ~/.netrc echo "machine api.github.com login $GITHUB_USER password $GITHUB_ACCESS_TOKEN" >> ~/.netrc The username and access token are stored as Secrets among Xcode Cloud's environment variables and referenced from ci_post_clone.sh. Adding the URL under Additional Repositories Add the library's repository URL under Additional Repositories in the Xcode Cloud settings inside App Store Connect. Deleting settings with defaults delete Even once the private library could be fetched, dependency resolution still failed with an error like the following: Could not resolve package dependencies: a resolved file is required when automatic dependency resolution is disabled and should be placed at XX/XX/Package.resolved. 
Running resolver because the following dependencies were added: 'XXXX' ( https://github.com/~~/~~.git ) fatalError This error occurs because SwiftPM on Xcode Cloud tries to resolve package versions automatically instead of honoring Package.resolved. Deleting the following Xcode defaults makes the build pass: defaults delete com.apple.dt.Xcode IDEPackageOnlyUseVersionsFromResolvedFile defaults delete com.apple.dt.Xcode IDEDisableAutomaticPackageResolution To be honest, I couldn't pin down the difference between these two settings... Running xcodebuild's help locally shows similar options: $ xcodebuild -help ... -disableAutomaticPackageResolution prevents packages from automatically being resolved to versions other than those recorded in the `Package.resolved` file -onlyUsePackageVersionsFromResolvedFile prevents packages from automatically being resolved to versions other than those recorded in the `Package.resolved` file Both are described in exactly the same words (preventing packages from being resolved to versions other than those recorded in Package.resolved), so the help doesn't settle it either... A SwiftPM issue raises much the same question, and this workaround resolved it there, so for now this approach seems fine. https://github.com/swiftlang/swift-package-manager/issues/6914 In any case, deleting these two settings makes SwiftPM resolve library dependencies using Package.resolved alone. Conclusion Setting up .netrc in ci_scripts/ci_post_clone.sh, which Xcode Cloud runs right after cloning, lets builds reference private repositories, and adding the defaults delete commands resolves the library dependencies, so builds now pass on Xcode Cloud. #!/bin/sh defaults delete com.apple.dt.Xcode IDEPackageOnlyUseVersionsFromResolvedFile defaults delete com.apple.dt.Xcode IDEDisableAutomaticPackageResolution touch ~/.netrc echo "machine github.com login $GITHUB_USER password $GITHUB_ACCESS_TOKEN" >> ~/.netrc echo "machine api.github.com login $GITHUB_USER password $GITHUB_ACCESS_TOKEN" >> ~/.netrc Finally Fastlane is a venerable, great tool, but adopting Xcode Cloud let us simplify our App submission flow. As noted above, Xcode Cloud brings many benefits, so do consider adopting it. Appendix https://developer.apple.com/documentation/xcode/writing-custom-build-scripts https://speakerdeck.com/ryunen344/swiftpm-with-kmmwoprivatenagithub-releasedeyun-yong-suru 
https://qiita.com/tichise/items/87ff3f7c02d33d8c7370 https://github.com/swiftlang/swift-package-manager/issues/6914
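As a small addendum to the .netrc setup described above: the generation logic in ci_post_clone.sh can be exercised outside Xcode Cloud too. The sketch below writes the same two entries to a local demo file instead of ~/.netrc, with placeholder fallbacks for the two secret variables (example-user and the token value are illustrative, not real credentials):

```shell
#!/bin/sh
# Sketch of the .netrc generation done in ci_scripts/ci_post_clone.sh.
# GITHUB_USER / GITHUB_ACCESS_TOKEN are assumed to be provided as secret
# environment variables in the Xcode Cloud workflow; the fallbacks below
# are placeholders so this sketch runs standalone.
GITHUB_USER="${GITHUB_USER:-example-user}"
GITHUB_ACCESS_TOKEN="${GITHUB_ACCESS_TOKEN:-ghp_placeholder}"

NETRC="./netrc.demo"   # the real script writes to "$HOME/.netrc"
: > "$NETRC"           # create or truncate the file
chmod 600 "$NETRC"     # credentials files should be owner-readable only

# github.com covers `git clone`; api.github.com covers downloads of
# binary artifacts from GitHub Releases.
for host in github.com api.github.com; do
  printf 'machine %s login %s password %s\n' \
    "$host" "$GITHUB_USER" "$GITHUB_ACCESS_TOKEN" >> "$NETRC"
done

cat "$NETRC"
```

The chmod 600 is worth keeping even in the real script: restricting permissions is basic hygiene for any file holding credentials.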
Introduction Hello! I'm Ren.M from the Project Promotion Group at KINTO Technologies. I usually work on frontend development for KINTO ONE (Used Cars). This time, rather than a technical topic, I'd like to introduce one of our internal activities! Who this article is for People interested in company club activities. People who feel communication between employees is lacking. What are the company clubs? Our company has a club culture, with quite a few clubs (e.g., futsal, golf). Each club has a public Slack channel, participation is entirely voluntary, and anyone can join casually! Some people even belong to several clubs! The basketball club I belong to rents a gymnasium near the office and holds practice sessions of about three hours from early evening. Gym slots are allocated by lottery, but we basically manage to play every month! To keep things running smoothly, volunteers split the roles: someone books the gym each month, someone goes to pay the usage fee, and someone manages the club dues. Once a booking is confirmed, we announce it on Slack and recruit participants! It varies by day, but attendance is usually around ten people! Practice scenes What I've gained from club activities A way to reset: our company employs many engineers, and most of us do desk work; since we sometimes work from home as well, it's easy to fall short on exercise. Exercising through club activities refreshes both mind and body! That said, things inevitably get heated, so we take care not to get injured! Interaction with other departments: personally, I feel this is the biggest strength of club activities. Clubs include employees from many departments, so you can communicate with people you'd never otherwise work with. Compared to meeting someone for the first time in a meeting, having already connected through club activities may make the subsequent work go more smoothly. I also hope it helps new employees settle into the company. In closing How was it? I think company clubs are a great culture that deepens friendships between employees while letting everyone recharge! If you join our company, please try interacting with all kinds of colleagues through club activities! There are also many other articles on the tech blog, so please take a look!
Greetings Hello, everyone. I'm Nakaguchi from the Mobile App Development Group. How was iOSDC Japan 2024 for all of you?? With this year's edition held in August, the festival mood felt even hotter than usual!! I'd be delighted if this article reaches people who attended iOSDC, iOS engineers, and conference lovers. Up through last year, our iOSDC participation was informal: whoever wanted to attend did, and attendees would at most give LT-style recaps at an internal study session or write tech blog posts afterward. But the 2024 KINTO Technologies is a different beast!! This year we went all in: we became a sponsor, several of us wrote proposals (and one was even accepted, amazing 🎉!!), and we hosted an iOSDC retrospective event!! As the final wrap-up, I get to write this blog post!! About the sponsorship KINTO Technologies sponsored iOSDC for the first time this year 🙌!!! The Tech Blog team that had been energizing our cross-company events has been reborn as the Technical PR Group, and we are putting more effort than ever into external events!! Besides iOSDC, we are also sponsoring DroidKaigi 2024 and Developers Summit KANSAI. We're showing our face at more and more large conferences! For this iOSDC, the iOS engineers of the Mobile App Development Group took the lead and, with support from the Technical PR Group, the Creative Office, and many others, the whole company got behind the sponsorship. Our members have written up the details in separate articles and presented at the retrospective event described below, so please have a look!! [Tech Blog] My First iOSDC Sponsorship Diary: this one focuses on the novelties and other produced goods!!! Do give it a read! https://blog.kinto-technologies.com/posts/2024-08-21-iOSDC2024-novelties/ [Tech Blog] KINTO Technologies is a Gold Sponsor of iOSDC Japan 2024, and here is our challenge token 🚙: this one includes interviews with our members! https://blog.kinto-technologies.com/posts/sponsored-iosdc-japan-2024/ [Slides] Everything we did before our first iOSDC booth: a chronological walkthrough of how we prepared the sponsor exhibit; if you're curious about sponsoring a conference, there's plenty here worth referencing!!! https://speakerdeck.com/ktchiroyah/iosdcchu-chu-zhan-matenisitashi-wogong-you-sitai About the proposals This year we held our first company-wide proposal-writing session 🙌!!! Members interested in speaking got together and, using slides like these as a guide, hashed out together how and what to write, and submitted the proposals below!! https://fortee.jp/iosdc-japan-2024/proposal/7fd624c8-06ec-4dc4-960a-da37f74cf90f https://fortee.jp/iosdc-japan-2024/proposal/a82414cd-54d7-4abb-aa20-e35feb717489 https://fortee.jp/iosdc-japan-2024/proposal/e9e13b6d-0b74-4437-8ec0-ba6598b70ad7 https://fortee.jp/iosdc-japan-2024/proposal/ab0eeedf-0d4f-47a6-8df8-bd792b4d70ca And the following were accepted!! Truly amazing 🎉!! 
https://fortee.jp/iosdc-japan-2024/proposal/25af110e-61d0-4dc8-aba5-3e2e7d192868 https://fortee.jp/iosdc-japan-2024/proposal/c3901357-0782-4fb5-89b8-cb48c473f066 Later, hearing how other companies do it, with proposal review sessions and a very different volume of submissions, I felt we can't afford to fall behind. Next year we'll push even harder! Hosting the iOSDC retrospective event Big events like this always come with after-events, and last year several companies held iOSDC retrospectives. \And this year, we hosted one too 🙌!!!/ Why we held it, how it came together, and how the day went are all covered, with some passion, in this blog post, so please give it a read!!! https://blog.kinto-technologies.com/posts/2024-09-12-after-iosdc/ From here, here's a roundup of which sessions the members who attended iOSDC actually watched. KINTO Technologies' session-viewing ranking With 15 attendees (including 4 partner-company members), we tallied which sessions everyone watched and turned the results into a ranking!! It should give you a good sense of which technologies we're interested in right now!! Tied for 2nd (6 viewers): Learning the full picture of Swift 6's typed throws and error handling in Swift https://fortee.jp/iosdc-japan-2024/proposal/c48577a8-33f1-4169-96a0-9866adc8db8e The talk contrasted typed throws with the untyped throws it presupposes, which made it very easy to follow. At first glance, typed throws looks like an obvious win, but I appreciated that the speaker also covered the official caution against using it casually, and hearing speaker koher's own views was instructive. Tied for 2nd (6 viewers): Panel discussion "The new era opened by Strict Concurrency and Swift 6: how shall we live?" https://fortee.jp/iosdc-japan-2024/proposal/5e7b95a8-9a2e-47d5-87a7-545c46c38b25 We happen to be investigating Strict Concurrency ahead of Swift 6 ourselves, so this session was extremely useful, and we hope to move forward using what was presented as a reference. The panel format was also refreshing, with the speakers backing each other up nicely; I'd love to see more talks of this type. Tied for 2nd (6 viewers): Shared Swift Packages in practice: accelerating development https://fortee.jp/iosdc-japan-2024/proposal/52d755e6-2ba3-4474-82eb-46d845b6772c Since we also develop multiple apps, a shared Swift Package is very appealing; on the other hand, our apps differ enough in character that there may not be much to share, which is the dilemma. Even so, the steps toward a shared Swift Package (team structure, operating model, and so on) were very instructive. Tied for 1st (7 viewers): Rookies LT session https://fortee.jp/iosdc-japan-2024/proposal/95d397a6-f81d-4809-a062-048a447279b3 One of our members was on stage, so we rushed over to cheer!! Cheering with penlights is fun!! Many of the talks were fascinating, and some members said they'd like to take on the challenge next year! 
Tied for 1st (7 viewers): The magic of App Clips: a new era of iOS design development https://fortee.jp/iosdc-japan-2024/proposal/66f33ab0-0d73-479a-855b-058e41e1379b None of our apps have adopted App Clips yet, so many members said they'd like to try. On the other hand, challenges seem likely to surface, such as how best to distribute App Clip codes. Below are the other sessions with high viewing counts. Watched by 4: A thorough guide, with implementation examples, to the many kinds of ViewController in iOS/iPadOS; Unraveling what makes an app feel iOS-like; LT session, second half; Cross-platform adoption keeps growing: is Swift-based iOS development finished...?; An introduction to software development for confronting complexity. Watched by 5: Understanding the data formats behind putting My Number Card on iPhone; GraphQL and schema-first development: opening up the future of ride sharing. Tallying everything, the average number of sessions watched per person this time was 11.25!!! Bonus Since we ran a sponsor booth this year, I was curious which booths left an impression on everyone, so I took a survey!! Nine people responded; the results are below (only booths that received at least one vote are shown). Tally of the most memorable booths As you can see, the votes were spread quite thin. (I suspect the six votes our own booth collected were everyone being considerate!) It drives home how hard it is to build a booth that appeals to everyone. DeNA, which gathered four votes, is impressive as ever. Closing As mentioned at the start, we put serious company-wide energy into this year's iOSDC! Sponsorship, proposals, the retrospective event: personally, I was thoroughly satisfied with all of them. That said, there's still plenty we can improve, so I hope we come back to iOSDC even stronger next year!! And as always, many of the sessions were highly educational, which made attending feel worthwhile all over again.
Background Introduction Self-Introduction Hello. My name is Li Lin, from the DevOps Team of the KINTO Technologies Global Development Group. Until 2017, I worked in China as an engineer, project manager, and university lecturer; in 2018, I started working in Japan. I'm a working mother of two, balancing my job while actively reskilling. Meet our DevOps team The Global Development Group's DevOps team started operating this year. Our team is international: its members speak Japanese, Chinese, and English as their native languages, and we keep communication smooth by taking each member's language skills into account. As a new team, each member brings different experience, but we cooperate actively whenever we face challenges, and I believe our teamwork is going well. DevOps Team responsibilities There are currently multiple teams within the Global Development Group, and the DevOps Team acts as a common team serving the entire group. Our specific responsibilities are as follows:
- Formulate Global team deployment standards: establish CI/CD and development-environment standards (Git/AWS/Grafana/SonarQube, etc.), establish deployment standards for common components across Global teams, improve common DevOps practices within the Global teams, collect feedback on these tasks, and run the PDCA cycle.
- Provide customized support individually: for requests not listed above that do not apply to all groups, we assess urgency and necessity, then plan and support the implementation. Generally, the DevOps Team provides support while the Application Team handles implementation.
- Error resolution support: help resolve errors that occur during CI/CD processes and environment usage.
- Improve DevOps and AWS knowledge within the group: conduct study sessions and handle individual inquiries. 
- Contact point with the Platform Group: the DevOps Team handles inquiries between the Global Development Group and the Platform Group, collects feedback, and establishes operational standards for the groups.
- Standardization of operational tasks: establish standards for operational tasks; some tasks are outsourced to external vendors.
- Cost monitoring and policy setting: optimize environment costs.
- Inquiry correspondence: accept the inquiries mentioned above.

Target audience of this article
This article is intended for experienced developers who are considering or have already implemented Flyway. When I first started using Flyway, I did some research online but found very little information that provided an overall picture. This article serves as a proposal for introducing Flyway, and I would be honored if you find the information helpful.

Introducing Flyway

What is Flyway?
Flyway is an open-source database migration tool. It makes it easy to version-control databases across multiple environments. The applicable scenarios for each command are as follows:

Baseline
Running the Baseline command creates the initial version for Flyway. The default baseline version is "1". In the Community Edition, you can create a baseline only once; it cannot be updated. If some tables already exist in the target database, you must run Baseline first. Otherwise, the Migrate command will result in an error.
[Scenario]
Step 1) Before introducing Flyway, set the version of the already-applied SQL scripts to a number smaller than "1".
Step 2) Execute the Baseline command.
Step 3) Execute the Migrate command. As a result, SQL scripts with a version number higher than "1" will be applied.
[Reference] Baselines an existing database

Clean
The Clean command completely clears the target schema. Since this makes the schema empty, you must implement measures to prevent it from being run against production environments.
[Scenario]
If you want to revert to the initial version, you can do so by following the steps below.
Step 1) Run the Clean command.
Step 2) Run the Migrate command.
[Reference] Wiping your configured schemas completely clean

Info
Displays Flyway's migration status information. This command also lets you verify that Flyway can connect to the database.
[Scenario]
After execution, information like the following is displayed (example):

```
+-----------+---------+-------------+------+--------------+---------+
| Category  | Version | Description | Type | Installed On | State   |
+-----------+---------+-------------+------+--------------+---------+
| Versioned | 00.01   | initial     | SQL  |              | Pending |
| Versioned | 00.02   | initial     | SQL  |              | Pending |
+-----------+---------+-------------+------+--------------+---------+
```

[Reference] Prints the details and status information about all the migrations

Migrate
Applies new SQL files that have not yet been applied. This is the most commonly used command; it is used every time the database needs to be updated to a new version.
[Reference] Migrates the schema to the latest version

Repair
Removes the execution history of SQL scripts that resulted in errors. However, it cannot undo their execution results: the Repair command only removes the execution history of failed SQL scripts from the flyway_schema_history table (Flyway's version control table) in the database. A common situation is that a single SQL file contains multiple SQL statements and an error occurs partway through; the statements before the error will have been applied, while those after it will not. In such cases, carefully check which SQL scripts were applied and make sure all scripts end up applied correctly.
[Scenario]
[Example] Suppose you are applying V01_07, V01_08, and V01_09, and V01_07 and V01_08 succeed but V01_09 fails. You can take the following steps.
Step 1) Fix V01_09.
Step 2) Execute the Repair command.
Step 3) Run the Migrate command again.
[Reference] Repairs the schema history table

Validate
This command checks whether the SQL scripts in the project have been applied to the database and whether the versions match. You can also use it to verify that the current database matches the version in the cloud.
[Reference] Validates the applied migrations against the available ones

Background to Flyway's implementation
Without a tool like Flyway, you need to log in to a bastion server for the database and run update scripts every time you deploy. Most of the Global Development Group's services are composed of microservices, and as the number of environments grew, the traditional method of updating databases via bastion servers became increasingly burdensome and risky, leading to operational challenges. These circumstances led us to consider introducing Flyway. Initially, we tried a setup in which a GitHub job executed Flyway commands via Lambda on AWS. When we actually tried using it, we encountered the following issues:
- If you migrate to AWS without sufficiently verifying the SQL scripts in a local environment, the migration may fail, making recovery difficult.
- If you update the database manually without building a Flyway environment locally, there is a high risk that its structure will diverge from the database on AWS.
With these issues in mind, during the first PDCA cycle we implemented the Flyway setup described below.

Flyway implementation method at the KINTO Technologies Global Development Group
To use Flyway in a Spring Boot application, we implemented the following functions:

Flyway is integrated directly into the application
Usage timing: migrations are executed automatically when the application is started locally and when it is deployed to AWS.
Purpose: this allows SQL migration scripts to be tested locally and automates the migration process, reducing manual effort.

Introducing the Flyway plugin
Usage timing: during local development.
Purpose: to run Flyway commands using the plugin when automatic migration cannot be performed locally.

GitHub job implementation for Flyway commands
Usage timing: when automatic migration cannot be performed during deployment to AWS, Flyway commands are executed from a GitHub job.
Purpose: to enable the execution of Flyway commands without logging in to AWS.

Next, I will introduce the final configuration for each implementation.

Integrating Flyway into the application
By integrating Flyway into the project, you can achieve the following:
- Databases in each environment are automatically migrated after the application starts.
- Migration SQL scripts are validated in a local environment before migrating to the AWS database.

The details are as follows. By running the following commands, you can start a MySQL Docker image locally; once the application starts, the latest SQL scripts are automatically migrated.

```shell
docker-compose up -d
./gradlew bootRun
```

Introducing the Flyway plugin
You can also maintain the local database manually by executing Flyway commands through the plugin.

Introducing GitHub jobs that can execute Flyway commands
Once deployed on AWS, the database is automatically migrated on Aurora. If this does not occur, you need to run the Flyway commands manually. Flyway commands are executed via Lambda on AWS. The configuration diagram is as follows, and the flow from executing the GitHub job to completing the Flyway run is:
- Upload the execution file from the GitHub job to S3.
- Extract the necessary parameters from the payload (JSON).
- Use the AWS CLI to extract the information required for the Flyway run.
- Retrieve the zip file containing the SQL scripts from the S3 bucket.
- Execute Flyway (using a Docker image on Lambda).
- Place the results in the S3 bucket.

The image below shows the process when executing the command on GitHub. We built this system so that it can be run without logging in to AWS. This setup provides the following in each environment:
- Databases in each environment are automatically migrated after the application starts.
- Migration SQL scripts are validated in the local environment before migrating to the AWS database.
- Tools for executing Flyway commands are provided in each environment.

Using Flyway has brought the following benefits:
- Deployment time was significantly reduced (by more than half).
- Eliminating database discrepancies between environments reduced unnecessary bugs and misunderstandings during development.
- The workload required for managing database versions in each environment was minimized (as long as the version is clearly indicated by the SQL script name).
- Testing and reviewing can prevent incomplete queries from being executed.
- There is no need to log in to a bastion server built on AWS to perform operations.

Of course, there are some precautions when using Flyway:
- If there are many developers, agree on a consistent way of using it.
- Troubleshooting and recovery from errors can be time-consuming.

In theory, this mechanism also makes it possible to spin up a database while GitHub Actions CI/CD jobs are running, but we have not yet verified this. I am also considering using Flyway to build a database for automated CI/CD testing. While Flyway has brought many benefits, it has also caused some issues, and I believe there is room to improve our usage standards through the PDCA cycle. By introducing Flyway gradually, depending on the environment and usage scenario, it can be used more safely and efficiently. If you're interested, we encourage you to give it a try.
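As a concrete illustration of the plugin-based local workflow described earlier, here is a minimal sketch of what a Flyway Gradle plugin setup could look like. This is not our actual configuration: the plugin version, JDBC URL, schema name, and environment variable names are placeholders, and a real setup would manage credentials more carefully.

```kotlin
// build.gradle.kts — hypothetical example, not the project's actual configuration.
plugins {
    id("org.flywaydb.flyway") version "9.22.3" // version is illustrative
}

flyway {
    // Placeholder connection settings for a local MySQL started via docker-compose.
    url = "jdbc:mysql://localhost:3306/example_db"
    user = System.getenv("DB_USER") ?: "root"
    password = System.getenv("DB_PASSWORD") ?: ""
    // Flyway's default location for migration scripts in a Spring Boot project.
    locations = arrayOf("filesystem:src/main/resources/db/migration")
    // Baseline an existing schema instead of failing on the first Migrate.
    baselineOnMigrate = true
}
```

With something like this in place, the commands discussed above map onto Gradle tasks such as `./gradlew flywayInfo`, `./gradlew flywayMigrate`, and `./gradlew flywayRepair`, so the local database can be maintained without a separate Flyway CLI installation.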
Introduction
Hello. I am Nakaguchi, team leader of the iOS team in the Mobile App Development Group. In my day-to-day work, I develop iOS apps for the KINTO easy-application app and Prism Japan (the smartphone app and the recently released web version). Getting right to the point: on Monday, September 9, 2024, we held the iOSDC JAPAN 2024 AFTER PARTY, a retrospective event for iOSDC Japan 2024 (held Thursday, August 22 to Saturday, August 24, 2024). In this article, I look back on why we held it, what we did to prepare, and how the day itself went. The section on why we held it is very much my personal take, and I would be glad if it resonates with many of you.

I hope this post will be of interest to:
- people who attended this event
- people who attended iOSDC
- people who often attend events, or want to
- people who organize events, or want to

Holding this event gave my own motivation a huge boost, and I am writing this tech blog post because I wanted to share that feeling with as many people as possible.

Why we held the event
I had been planning this event since around April. If you had asked me why, honestly, I could not have articulated it clearly at the time. Since becoming a team leader last October, I have attended not only iOS events but also many events on developer productivity, organizational management, engineering management, and anything else that caught my interest. Along the way, I noticed feelings like: "Attending events really boosts my motivation," and "People who speak at or organize events look really cool." So if I had to put my April feelings into words, it would have been: "It looks cool, and I want to run an event myself!"

However, "because it looks cool" cannot justify an event that consumes money, time, people, and many other resources... And so began my days of agonizing over what it really means to hold an event. Even now that the event is over, I do not think I have reached a definitive answer. (I am simply grateful that I was allowed to hold the event in such an ambiguous state.) When you organize an event while belonging to an organization, something is inevitably expected of you. Common answers are "raise the organization's presence," "spread awareness of the service," or "help recruiting." All of these are legitimate, significant reasons to hold an event, and if those results materialize, the event can rightly be called a great success.

Personally, though, something did not quite click. At IT industry events, I believe most attendees come for their own growth: they want to gain new knowledge, expand their network, or simply enjoy attending. Very few attend because they want to learn about the organizer or its services, or because they want to change jobs to that company. After wrestling with this, I arrived at my own conclusion: "I want to pass motivation on to as many people as possible." As I said above, attending events makes me feel strongly motivated, and I suspect many others feel the same. If even one more person thinks, "I'll work harder starting tomorrow," those small increments add up to making the world a better place. A motivation boost may also lead people, like me, to want to organize or speak at events, and seeing them may in turn inspire still others to organize or speak. I believe good motivation is contagious!

So, at this stage, I held this event with "I want to pass motivation on to as many people as possible" as its purpose (although back in April, when the idea first came to me, I had not sorted things out this far). (That said, from an organizational standpoint, "it raises motivation" alone will not get events approved one after another, so my days of agonizing look set to continue.)
Next, an overview of the event.

Event overview
Event name: iOSDC JAPAN 2024 AFTER PARTY
Date and time: Monday, September 9, 2024, from 19:00
Attendees: around 20
The event was jointly hosted by three companies, WealthNavi, TimeTree, and us, as a retrospective on iOSDC. The program consisted of three LTs (one slot per company) plus a panel discussion with one panelist from each company. Now, let me walk through the road to the event.

Leading up to the event
Around April, I decided I wanted to host a mobile-development event, but I agonized over how to go about it. Our company has a technical PR (DevRel) group that supports event operations, and I was confident that with their support, running the event smoothly would not be a problem. On the other hand, I expected that attracting attendees, recruiting speakers, and deciding the event theme would be difficult even with DevRel's support, so I concluded that holding a mobile event with our company alone would be hard. Findy puts great effort into event organizing and has extensive know-how in attracting attendees and recruiting speakers, so I wanted to ask for their help, and I attended one of their events held in May. I also wrote an event report about it on this blog, so please have a look. That event gave me the chance to start exchanging information with our contact at Findy. After repeated discussions about what kind of event to hold, they introduced us to WealthNavi and TimeTree, and we decided to try an iOSDC retrospective event. My heartfelt thanks to Findy for their advice and support with event operations, and to WealthNavi and TimeTree for co-hosting.

Once the three companies had agreed to hold an iOSDC retrospective event, many things were decided smoothly: the event format, the speakers and panelists, the date and time, and so on. With the Connpass registration page safely completed, the next step was recruiting attendees. All three companies wanted to emphasize communication with attendees, so the event was offline only. Since it would be held in our event space, capacity suggested a target of about 30 registrations. We opened the Connpass page on Thursday, August 8, 2024, and received about 10 registrations within a few days, which I thought was a decent start. However, I expected the real PR push to be during iOSDC itself, Thursday, August 22 to Saturday, August 24, and that this was where we could grow the numbers. This year we had our first sponsor booth, so we promoted the event hard there, and our official X account posted about it multiple times during iOSDC.

The result: the number of new registrations during iOSDC was exactly "0"...
Honestly, I had underestimated attendee recruitment...
Looking back, our approach to PR at the sponsor booth needs improvement. Rather than just handing out flyers, we should have designed a flow that led to on-the-spot registration (for example, giving novelty goods to those who registered). A lesson for next time...
Indeed, checking the statistics on the Connpass event page, you can see that between August 22 and 24 there were no new registrations at all, and page views did not grow either.

[Image: Statistics checked on Connpass]

After that, through the period up to Monday, September 9, registrations trickled in at the pace shown in the image above, and I was also given chances to announce the event when I attended other companies' events. By the day itself, we had 24 registrations. The theme of "an iOSDC retrospective event" did seem to have a certain drawing power. We did not reach the original target of 30 registrations, but personally, I felt this was more than enough for my first time organizing an event. All that remained was the day itself.

The event day
Same-day cancellations are inevitable at events like this, for all sorts of reasons. Unfortunately, several people did have to cancel on the day. But at that point, I had no time to fret over the attendance count going up or down; I was focused on making it an event that our co-hosts WealthNavi and TimeTree, and everyone who attended, would be glad they joined. Here is a quick look back at the day.

I waited nervously for everyone to arrive. Here is the venue, all set up.

[Image: Venue setup safely completed]

At 19:00, with WealthNavi, TimeTree, and the attendees assembled, the first LT began: "Package.swiftから始めるDX" by Muta-san of WealthNavi.

[Image: Muta-san's talk]

Starting from the basics of Swift Package Manager, the talk covered things I thought I knew but did not, and was very educational. It was also a rare chance to hear about another company's initiatives and the direction they are aiming for, including an explanation looking ahead to the upcoming Swift 6.

The second LT: "iOSDCのプロポーザルを形態素解析してトレンドの変遷を探ってみた" by Sakaguchi-san of TimeTree.

[Image: Sakaguchi-san's talk]

I had been intrigued from the moment I saw the title. Having attended iOSDC several times, I had sensed certain trends in the sessions, and it was fascinating to see those trends clearly reflected in the proposals. The analysis tool was self-built in Xcode, and the live demo on the simulator during the talk was fun to watch.

The third LT: "iOSDC初出展までにした事を共有したい" by Hinomori-san of KINTO Technologies.

[Image: Hinomori-san's talk]

Since this was our company's first sponsor booth, he shared the struggles of the preparation period. I was involved in preparing some of the exhibits myself, and the trial and error, with no right answers, over what content would land with visitors and how to make it easy to view was genuinely hard. What we produced as a sponsor is also covered in detail on this tech blog, so please take a look.

Next, after a break and a toast, we held the panel discussion. The panelists were Cho-san of WealthNavi, masaichi-san of TimeTree, and Hinomori-san of KINTO Technologies, with me moderating.

[Image: Panel discussion members]

We had prepared discussion themes in advance, decided after hearing from the panelists about which topics interested them, and used them to look back on iOSDC.

[Image: Panel discussion themes]

We could not get through every theme in the time available, but I tried to moderate by picking up the themes that fit the flow of the conversation. The panelists talked about the state of iOS development at each company, their preparations for iOSDC, and how this year differed from previous years.

[Image: The panelists]

Finally, we took a group photo with all the attendees.

[Image: Group photo]

Reflections after the event
As mentioned at the start, this event went from planning around April all the way to actually happening. Right up until it ended, I prepared with constant anxiety: Would the event go off safely? Would attendees show up? Would the moderation go well? Thanks to the cooperation of our co-hosts WealthNavi and TimeTree, our technical PR group, and those who took on staff duties on the day, I personally felt the event turned out enormously satisfying. Of course, the attendees themselves also did a great deal to make it lively. I want to sincerely thank everyone involved in this event.

● What went well
Building connections with other companies, such as WealthNavi, TimeTree, and Findy, through organizing this event was invaluable. Also, completing my first event as an organizer gave me confidence.

● What to improve next time
As mentioned above, I found attracting attendees very difficult. We have not yet found a good solution, so next time we will think it through carefully with everyone involved. I also wish more members of our iOS team had taken part in the event. This time, our company's LT and panel slots were filled by assistant manager Hinomori-san, who already speaks and attends events frequently; my hope for this event had been for members with fewer speaking opportunities to take on the challenge. However, when we solicited volunteers internally, no members came forward, so we asked Hinomori-san to speak. I feel this is a major improvement point going forward: at the internal recruiting stage, I should work on lowering the hurdle for speaking and on building a support structure for preparing talks.

In closing
We have already decided to hold a DroidKaigi 2024 retrospective event in October, again with WealthNavi and TimeTree, and we would like to keep holding events like this in a similar format from time to time. I said at the outset that I want to pass motivation on to as many people as possible, but I suspect the person whose motivation rose the most through this event was none other than me. If any attendees also came away feeling more motivated, I think this event can be called a great success. Through events like this and other activities, I hope to keep raising the motivation of everyone involved.
Introduction
Hello. I am Nakaguchi from the Mobile App Development Group at KINTO Technologies. I participated in the event TechBrew in Tokyo: Facing Technical Debt in Mobile Apps, held on May 23, 2024, and would like to report on it.

The event day
The venue was Findy's newly renovated office. I had heard the rumors, but seeing the spacious, beautiful event space in person was exciting 😀 True to the name "TechBrew," plenty of alcohol and snacks were available, and the atmosphere was very relaxed. However, since I had an LT (lightning talk) later, I refrained from drinking until my presentation was over👍

1st LT: "Steps to evolve Bitkey's mobile app"
They shared the history of Bitkey's mobile app up to the present day. The app was originally built with React Native and evolved through a transition to native development, the adoption of SwiftUI, and then the adoption of TCA. However, they said the SwiftUI implementation is still a work in progress and might have been a mistake: they struggled because SwiftUI's behavior changes across iOS versions, which was something I could relate to from my own experience. The comments that really stood out to me were, "Everything we chose because we thought it was good was the right choice," and "The decisions we made at that time were probably the right ones," and I felt how true that is. I also had the opportunity to chat with the presenter, Ara-san, during the social gathering after the LTs. We talked about many things, including Swift on Windows, and I learned a lot of new information. It was a very enjoyable conversation.

2nd LT: "Approaching technical debt in mobile apps as a whole company"
They discussed what technical debt is and how to tackle it. One of the speakers highlighted the need to distinguish between:
- debt we are aware of, but accept in order to gain returns, and
- debt we are unaware of, or that became debt due to changes in circumstances.
They mentioned that the former is manageable, but the latter can become problematic if ignored for too long. To address technical debt, they stressed the importance of negotiating for time to resolve it, even if that means pausing business tasks. They emphasized that technical debt is a shared problem involving not just the development team but all stakeholders, which I agree with; I feel such negotiation skills are especially important for engineering managers and team leaders. They also mentioned that they use FourKeys to visualize the situation, while warning against focusing too much on numerical goals. I likewise feel that visualizing a team's development capability is challenging, and I am careful not to rely too heavily on frameworks like FourKeys.

3rd LT: "How to deal with technical debt in Safie Viewer for iOS"
This presentation covered the challenges and strategies of developing an app that has been around for 10 years. The app still uses many technologies from its initial release, and while there is a desire to re-architect, the current system is stable and still supports adding many new features. As a result, they could not justify time-consuming refactoring and were unable to take action to eliminate the debt. Currently, they are addressing the issues by doing what they can, guided by two main policies:
- Take immediate action where possible: update to the latest Xcode version as soon as it is released (there is code that cannot be written without upgrading, which otherwise leads to legacy code), and introduce Danger.
- Take a steady approach: the app currently uses MVC/MVP with closure-based asynchronous processing, and re-architecting from that state is risky, so they test modern technology in new features instead.
It made sense to me that, to actually get started, you need to draw up a concrete schedule. I am often hesitant about major refactorings, so I learned the importance of setting a clear schedule and sticking to it.
4th LT: "Ensuring safe mobile development with package management"
Like the third LT, this talk also focused on an app with a long history of 8 years. They discussed how they addressed technical debt by focusing on commonization and separation. A recent challenge they face is excessive commonization: for example, their Channel data has around 100 parameters (borrowing the speaker's term), and in many situations they end up with data that is not used every time. On the other hand, they warned that excessive separation of responsibilities can also be problematic: there were cases where functions were split out even though they were called from only one place, leading to an overengineered state. The importance of "thoughtful commonization" and "thoughtful responsibility separation" left a strong impression on me, and I realized I might have been separating things without much consideration. They also explained that it is a good idea to manage these issues using a package manager, and introduced some ideas and methods for doing so.

5th LT: "Tackling technical debt with GitHub Copilot"
This was my presentation; you can find the slides here. I discussed the use of GitHub Copilot in Xcode. Compared to VS Code, which officially supports GitHub Copilot, Xcode still has many limitations, and its usage rate is not growing as much. However, I found that Xcode's Chat function can significantly help in addressing technical debt, so I focused on that in my presentation. During the presentation, I demonstrated the Chat function, and I felt the attention of the entire audience sharpen; I was very happy that everyone seemed to be listening with interest. This was my first time speaking at an event outside our company, but the audience listened warmly, and I was able to complete my presentation without any problems.
Conclusion After the LT sessions, there was a social gathering where I had the opportunity to exchange information with many attendees. It was a very stimulating experience, and I felt motivated to continue participating in and speaking at such external events in the future. I also had a chance to speak with Takahashi-san, the organizer of the event. We discussed how great it would be to hold a joint event between our Mobile App Development Group and Findy. I look forward to actively pursuing such collaborations. As a souvenir, I received a bottle of IPA brewed by Findy!
The first commemorative request
This is HOKA from the Manabi-no-Michi-no-Eki (Learning Roadside Station) team. In February 2024, during our monthly company-wide meeting, we announced the launch of our "Manabi-no-Michi-no-Eki (Learning Roadside Station)" initiative. Following this announcement, Nakaguchi-san from the Mobile App Development Group's iOS team reached out with a request: "I'm looking for some advice on how to organize my study sessions."

Study Session Consultation for the Mobile App Development Group
This was our very first inquiry. We quickly organized a meeting with four members of the iOS team, including its team leaders, and three of us from the Learning Roadside Station team. The iOS team has been holding weekly study sessions since June 2023, aiming to raise the team's overall skills. In the first week of each month, they decide together which topics to focus on, and then spend the second to fourth weeks working on them. Facilitators also take turns. They mentioned that they have run a variety of formats, including casual conversation, LTs (lightning talks), and reading groups, and have even presented on the HIG (Human Interface Guidelines). My own impression was: "Everything seems so well organized. What could they possibly have left to worry about?" But that, I suppose, is typical of KINTO Technologies employees. As a result of this consultation meeting, three members of our administrative office were invited to observe one of their study sessions!

A Peek into the Study Session Next Door

Self-introductions and a casual chat session
So, we got to do our own "peek into the study session next door." The date was March 12, 2024. The iOS team gathered online and in a meeting room to start their study session. Since new members were joining that day, the theme was a casual chat session where everyone could introduce themselves. First came the self-introductions: 1 minute x 18 people, about 20 minutes in total.
They shared their names, the products they were in charge of, and recent updates. Although each introduction was only one minute, reactions in the Slack chat kept things lively, making it an efficient way for first-time participants to get a feel for each member's personality. The Learning Roadside Station team also took the opportunity to introduce ourselves. In the second half, the casual chat began. One member mentioned that Awata-san, who had visited the Muromachi office the day before, said that deploying from Slack was reaching its limits, and suggested creating a mobile app that could integrate without needing to sign in. Then another member proposed, "Why not develop it in our spare time? Our Mobile App Development Group has producers and backend developers. If you're interested, we've created a Slack channel, so let's talk about it there." Wow! Then assistant manager Hinomori-san suggested, "How about developing a Learning Roadside Station app? It would be great to create an internal app. Maybe we could integrate NFTs and KTC tokens." Yajima-san added, "How about giving points for attending study sessions?" Hinomori-san said, "What if those who accumulate points by the end of the year get some kind of reward? It sounds fun, and it could be a good way to work on projects that are not yet ready to be released externally." Nakano-san added, "It might be great for internal members to develop for internal use!" A surprisingly positive turn for our Learning Roadside Station!!! I'm so pleased. "There's likely more we can learn from this activity beyond just writing source code." Comments like these flew around during the casual chat, offering hints for growth as engineers. This study session is going great, isn't it? The chat continued, and the excitement of the March session centered on the "try! Swift Tokyo" event, which would become April's study session topic.
With their assignments in hand for the next week, the iOS engineers returned to their own paths.
Introduction
Hello! I'm Romie, and I work on the Android side of the my route app in the Mobile App Development Group. At KINTO Technologies (KTC), we can take a wide range of courses using our Udemy Business accounts! This time, I took Kotlin Coroutines and Flow for Android Development, a course that explains the fundamentals of asynchronous processing on Android, as well as Coroutines and Flow, entirely in English.

Impressions
My honest impressions were:
- The English is very flat and easy to follow.
- Apart from Android terminology, there are almost no difficult words.
So I highly recommend it to anyone who wants to move beyond the beginner stage and study asynchronous processing, Coroutines, and Flow properly, or who wants to study English alongside the Android basics!

Highlights
Unlike traditional asynchronous approaches, Coroutines and Flow can run off the main thread and let you write asynchronous code far more concisely. They are also provided by the official kotlinx.coroutines library maintained by JetBrains, so they integrate seamlessly with the language. These points alone are a big advantage! All of the following are basic patterns, but I note them here partly as a memo for myself.

Callback
Callbacks are the most basic form of asynchronous processing. You can branch the handling in onResponse / onFailure.

```kotlin
exampleCallback1()!!.enqueue(object : Callback<Any> {
    override fun onFailure(call: Call<Any>, t: Throwable) {
        println("exampleCallback1 : Error - onFailure")
    }

    override fun onResponse(call: Call<Any>, response: Response<Any>) {
        if (response.isSuccessful) {
            println("exampleCallback1 : Success")
        } else {
            println("exampleCallback1 : Error - isSuccessful is false")
        }
    }
})
```

RxJava
With RxJava, you branch on onSuccess / onError inside subscribeBy.

```kotlin
exampleRxJava()
    .flatMap { result -> example2() }
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribeBy(
        onSuccess = { println("Success") },
        onError = { println("Error") }
    )
    .addTo(CompositeDisposable())
```

async/await
Perform asynchronous work with async/await, and process the results together with awaitAll. This is one of the most commonly used patterns among the traditional approaches.

```kotlin
viewModelScope.launch {
    try {
        val resultAsyncAwait = awaitAll(
            async { exampleAsyncAwait1() },
            async { exampleAsyncAwait2() },
            async { exampleAsyncAwait3() }
        )
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}

viewModelScope.launch {
    try {
        val resultAsyncAwait = exampleAsyncAwait()
            .map { result -> async { multiExampleAsyncAwait() } }
            .awaitAll()
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}
```

withTimeout
withTimeout handles timeouts; it throws an exception when the timeout is exceeded.

```kotlin
viewModelScope.launch {
    try {
        withTimeout(1000L) {
            exampleWithTimeout()
        }
        println("Success")
    } catch (timeoutCancellationException: TimeoutCancellationException) {
        println("Error due to timeout")
    } catch (exception: Exception) {
        println("Error")
    }
}
```

withTimeoutOrNull
withTimeoutOrNull also handles timeouts, but unlike withTimeout, it returns null on timeout instead of throwing.

```kotlin
viewModelScope.launch {
    try {
        val resultWithTimeoutOrNull = withTimeoutOrNull(timeout) {
            exampleWithTimeoutOrNull()
        }
        if (resultWithTimeoutOrNull != null) {
            println("Success")
        } else {
            println("Error due to timeout")
        }
    } catch (exception: Exception) {
        println("Error")
    }
}
```

Database operations with Room and Coroutines
This example combining Room and Coroutines checks whether the database is empty, then inserts values if there are any. Fetching the current database contents can throw an exception, so it is wrapped in try/catch. Together with Flow, this is probably one of the most common patterns in Android asynchronous code today.

```kotlin
viewModelScope.launch {
    val resultDatabaseRoom = databaseRoom.exac()
    if (resultDatabaseRoom.isEmpty()) {
        println("The database is empty")
    } else {
        println("The database contains values")
    }
    try {
        val examDataList = getValue()
        for (resultExam in examDataList) {
            database.insert(resultExam)
        }
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}
```

Flow
A basic Flow: emit an initial value in onStart, and log completion in onCompletion.

```kotlin
sealed class UiState {
    data object Loading : UiState()
    data class Success(val stockList: List<Stock>) : UiState()
    data class Error(val message: String) : UiState()
}

val anythingAsLiveData: LiveData<UiState> = anythingDataSource
    .map { anyList -> UiState.Success(anyList) as UiState }
    .onStart { emit(UiState.Loading) }
    .onCompletion { Timber.tag("Flow").d("Flow has completed.") }
    .asLiveData()
```

SharedFlow/StateFlow
SharedFlow and StateFlow are kinds of Flow; stateIn converts a Flow into a StateFlow. The difference between Flow and SharedFlow is that a plain Flow does not retain emitted values, while SharedFlow does. StateFlow, unlike the other two, has an initial value and lets you read its current value directly. SharedFlow retains values like StateFlow does, and multiple collectors can receive them.

```kotlin
sealed class UiState {
    data object Loading : UiState()
    data class Success(val stockList: List<Stock>) : UiState()
    data class Error(val message: String) : UiState()
}

val anythingAsFlow: StateFlow<UiState> = anythingDataSource
    .map { anyList -> UiState.Success(anyList) as UiState }
    .onCompletion { Timber.tag("Flow").d("Flow has completed.") }
    .stateIn(
        scope = viewModelScope,
        initialValue = UiState.Loading,
        started = SharingStarted.WhileSubscribed(stopTimeoutMillis = 5000)
    )
```

Summary
The content itself is mostly fundamentals, but because the explanations are in English, one pass took me quite a while. I think a second pass, after getting a better overall picture of asynchronous processing, would deepen my understanding, although the second pass might end up being mainly English study... Thank you for reading to the end!
Introduction
Hello, my name is Rina, and I am part of the development and operations team for our product Mobility Market by KINTO at KINTO Technologies. I mainly work as a frontend engineer using Next.js. Recently, I have been into painting Gundam models and playing Splatoon 🎮 At KINTO Technologies, we have the opportunity to purchase work-related books at the company's expense. These books are managed by the CIO Office and are available for employees to borrow freely. In this post, I would like to share how we made the management of these purchased books easier!

The Previous Book Management Method
The previous method of book management was to use Confluence and manually update the lending status. The management flow was as follows:
1. The administrator adds the purchased books to the book lending list in Confluence.
2. Anyone wishing to borrow a book selects it from the book lending list in Confluence and contacts the administrator via Slack to request a loan.
3. The administrator then updates the lending status in Confluence based on the Slack messages.
With this way of lending books, anyone wishing to borrow or return a book had to contact the administrator via Slack, and the administrator had to manually update the lending status each time. This was a hassle. To simplify things, we completely overhauled the way we manage our books!

The New Book Management Method
The new method uses JIRA workflows and a Kanban-style board, allowing everyone to see the lending status without going through the administrator.

[Image: Kanban-style board]

The management flow is as follows:
1. The administrator registers the purchased books as tickets in the board's library.
2. Anyone wishing to borrow a book selects it from the library and changes its status to "Borrowing".
And that's it! With all purchased books registered on the Kanban board, the administrator can see the lending status at a glance, with no need for manual updates.
Meanwhile, anyone wishing to borrow or return books can easily update the status themselves, without needing to contact the administrator or use Slack for these tasks. A JIRA Workflow to Simplify Tasks To create this board, we set up the following workflow: Workflow We created three statuses: "In Library," "Borrowing," and "Discarded/Lost." By automating the transitions between these statuses, we’ve minimized the need for manual input. The settings for each transition are as follows: Check out (changing a book’s status from "In Library" to "Borrowing") Automatically inserts the current date as the lending date. Automatically assigns the borrower as the assignee of the JIRA ticket. Counts the number of times the book has been borrowed. Check in (changing the status from "Borrowing" to "In Library") Automatically clears the lending and expected return dates. Automatically removes the borrower as the assignee. A Little Trick to Make Things Even Easier Get an overview of book management across all offices We’ve added icons for each location, making it easy to see at a glance which books are available in each office. You can also filter the view by office. KINTO Technologies has two offices in Tokyo and one each in Nagoya and Osaka. Previously, each office managed its books separately, but now all books are managed centrally on a single board. Receive Slack notifications when the status changes We also use JIRA's notification feature to inform the administrator about status changes via Slack. This Slack integration has made it easier to track newly purchased books and monitor who has changed the status. Improvement Results Revising the book management method brought the following benefits: For administrators: No longer need to manually update the management status of books. Can quickly view the lending status and see who has borrowed the books. 
Can centrally manage books that were previously managed separately by each office. For borrowers: No longer need to contact the administrator via Slack when borrowing or returning books. Can simply notify the administrator by changing the JIRA ticket status (no text input required!) Conclusion In this article, we shared how we simplified our book management method. By reducing some of the hassle, we hope to make both administrators and users happier✨
Self Introduction I am Morino, team leader of the CIO Office Security Team at KINTO Technologies. My hobby is supporting Omiya Ardija, the soccer team from my childhood hometown of Omiya, which is now part of Saitama City in Saitama Prefecture. In this article, I'll be introducing our vulnerability diagnostics efforts alongside Nakatsuji-san, who is passionate about heavy metal and is the main person in charge of our vulnerability diagnostics. What is a vulnerability? Let's take a moment to consider: what exactly is a vulnerability? A vulnerability is a software bug (defect or flaw) that compromises the CIA of information security. CIA stands for the following three terms: Confidentiality Integrity Availability Confidentiality ensures that only authorized individuals have access to specific information. For example, in an app used to view payslips, confidentiality is upheld if only HR personnel and I (as authorized individuals) can access my payslip. If a software bug allows others to view it, confidentiality is compromised. Confidentiality is maintained When only authorized individuals can view the payslip. Confidentiality is compromised When unauthorized individuals can view the payslip. Integrity ensures that information remains complete, accurate, and untampered with. Using the same payslip example, integrity is maintained if only HR personnel can delete or modify the contents of my payslip. If others can delete or alter it, integrity is compromised. Integrity is maintained When only authorized individuals can delete or edit the payslip. Integrity is compromised When unauthorized individuals can delete or edit the payslip. Availability ensures that information is accessible whenever it’s needed. For example, availability is maintained if HR personnel and I can access my payslip whenever necessary. If we cannot access the payslip when needed, availability is compromised. 
Availability is maintained When the payslip is always accessible Availability is compromised When the payslip is not accessible About our vulnerability diagnostics efforts The goal of vulnerability diagnostics is to identify bugs that compromise the CIA of information security. At our company, we conduct the following types of vulnerability diagnostics: Web Application Diagnostics Platform Diagnostics Smartphone Application Diagnostics Web Application Diagnostics Web application diagnostics can be broadly categorized into static and dynamic diagnostics. Static diagnostics identifies insecure code from the source code without running the application. Dynamic diagnostics evaluates the security of a running web application. Both types of diagnostics can be performed automatically or manually. In automated diagnostics, tools automatically check the source code or web application based on predefined settings. In manual diagnostics, humans inspect the source code or web application for vulnerabilities. Static diagnostics is also known as SAST (Static Application Security Testing), and dynamic diagnostics as DAST (Dynamic Application Security Testing). In our web application diagnostics, our security team primarily focuses on dynamic diagnostics, so I will explain both the automated and manual methods used in dynamic diagnostics. Automated diagnostics At our company, we use an automated diagnostic tool called AppScan . For example, when diagnosing whether a web application has SQL injection vulnerabilities, we input and execute attack codes designed to trigger SQL injection in the input fields. Manually checking every input field with various attack codes is time-consuming. If the web application session expires during diagnostics, we have to log in again, and some functions require a specific sequence of screen transitions, which can be tedious. 
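As a side note, the class of flaw these SQL injection checks probe for can be sketched in a few lines. This is a toy illustration, not our actual diagnostic tooling; the table, column names, and payload are hypothetical:

```python
import sqlite3

# Toy example (not real diagnostic tooling): why injecting attack strings
# into input fields reveals SQL injection flaws.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def find_user_vulnerable(name: str):
    # String concatenation: attacker-controlled input is parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"                 # a typical string a scanner injects
leaked = find_user_vulnerable(payload)  # returns every row -> vulnerable
safe = find_user_safe(payload)          # returns no rows -> input treated as data
```

A scanner automates exactly this kind of probing: it sends such payloads into every field and flags responses that leak more data than expected.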
Automated diagnostic tools like AppScan handle these tasks efficiently, making them incredibly useful. Manual diagnostics For manual diagnostics, we use a tool called BurpSuite .  You might wonder why we conduct manual diagnostics when we have automated tools. The security community OWASP (Open Web Application Security Project) has released the OWASP Top 10 , a ranking of the most critical security risks. Injection, which ranks third in the OWASP Top 10, is something automated tools are good at detecting: they can input various attack codes into fields more thoroughly than a human could. So what about the top issue on the list, broken access control? This issue is similar to the earlier example about ensuring the confidentiality of an app used to view payslips. Unfortunately, automated tools struggle to understand the specifics of a web application’s design and to determine whether its behavior is appropriate. Diagnosing such vulnerabilities requires a manual approach. Platform Diagnostics Platform diagnostics evaluates network devices such as firewalls and load balancers, as well as the configurations of servers that host web applications, including vulnerabilities in server operating systems and middleware. For platform diagnostics, we use a tool called nmap . During these diagnostics, we check for the following: unnecessarily open ports, use of vulnerable software, configuration issues, and protocol-specific vulnerabilities. Reference: Guidelines for Introducing Vulnerability Diagnostics in Government Information Systems, p. 7 Smartphone Application Diagnostics Smartphone app diagnostics typically involves two parts: diagnostics of the app itself and diagnostics of the Web API. For the Web API, we conduct vulnerability diagnostics similar to those for web applications. For the app itself, we perform static diagnostics based on OWASP’s Mobile Application Security Testing Guide (MASTG) . 
For future use, we are considering utilizing MobSF , which supports both dynamic and static diagnostics for app analysis. Recommended Books, Resources, and Websites for Learning More About Vulnerability Diagnostics If you’ve read this far, you might be interested in learning more about vulnerability diagnostics. Here are some helpful books, documents, and websites for further study. Books How to create Secure Web Applications systematically, 2nd Edition: Understanding the principles and implementing countermeasures for vulnerabilities Commonly known as the “Tokumaru book,” this is considered a foundational text for those learning about vulnerability diagnostics. It's so thick that it could be used as a blunt instrument, so if you want to carry it around, I recommend the e-book version. Documents How to Create a Secure Website by IPA. As the title suggests, this document provides guidance on building a secure website. It has fewer pages than the Tokumaru book mentioned above, so I recommend it for those who are new to vulnerability diagnostics. Websites Web Security Academy This is a vulnerability learning site run by PortSwigger, the developer of the vulnerability diagnostic tool BurpSuite mentioned above. It consists of textbook material on vulnerabilities and hands-on hacking exercises; you can learn by actually completing the exercises in your browser. Conclusion In this article, we introduced the security team's efforts in vulnerability diagnostics. Recently, it has become popular to implement Web APIs using GraphQL rather than REST. As the IT world is a place where technologies come and go quickly, we will continue striving to gather information and improve our operations daily, so that we can effectively diagnose vulnerabilities in applications built with new technologies.
I'm Ryomm, and I develop my route (iOS) at KINTO Technologies (KTC). This time, KTC is sponsoring iOSDC Japan 2024, held over three days from August 22 to 24, 2024, for the very first time! ▼ Also recommended ▼ ✨ KINTO Technologies is a Gold Sponsor of iOSDC Japan 2024 ✨ We're even running a booth✨ Many people from the Tech PR Group, the Creative Office, the Mobile App Development Group, and more have been involved in the preparations, and I think the booth has turned out to be a lot of fun. Please come visit the KTC booth! And we'd be delighted if you remember the name KTC (= KINTO Technologies)! We put a lot of care into everything we produced for this sponsorship, and in this post I'll introduce our many creations! Kumobii paper clip This is included in the novelty box! It was Chimrin's idea: practical and stylish! Kumobii is KINTO's official mascot character. https://corp.kinto-jp.com/mascot/profile/ You can clip it to your favorite page of the pamphlet or use it as a bookmark in a technical book. Although it's made of paper, the clip is quite sturdy and easy to use! Open the backing paper and... a token appears! Pamphlet ad A KTC advertisement also appears in the pamphlet included in the novelty box! We aimed for a design that conveys the spirit of KTC, which supports Toyota's mobility services on the technical side. Sticker & sticker backing sheet set This is a novelty handed out to everyone who stops by our booth! My idea (Ryomm's) was the one adopted 🙌 At events like this you receive piles of stickers at every booth; what do you all do with them? At try! Swift Tokyo 2024, I saw someone who had collaged the stickers they received onto their name badge, thought it was a great idea, and copied it myself. At iOSDC, the badge is a folded sheet inside a clear case, so collaging isn't possible; that's why we prepared a backing sheet you can collage on! We also gave it an iPhone-style design... sized it at roughly a 15 Pro... and kept it small enough to fit in the badge case. We'd be happy if you slip it into your badge case as a memento of the event. We're also handing out stickers styled after the icons of the apps KTC provides, so please stick those on the sheet too. Multi-card tool This is prize novelty #1 for clearing our booth challenge! The iOS team held an ideathon, and K.Kane's idea was adopted. When stowed If you're an iOS engineer, you've surely held a ruler up to the screen while implementing a View to check it matches the design... or maybe you have... or maybe you haven't... Either way, with this business-card-sized tool you're covered! You can measure lengths and angles anytime, anywhere. Tote bag A tote bag printed with the adorable Kumobii. This is prize novelty #2 for clearing the booth challenge. Since you choose between it and the multi-card tool, please come back to the booth as many times as you like. It was uka-san's idea: you receive so many things at iOSDC, so wouldn't a bag to hold them all be handy? It's made of quite sturdy material, highly recommended! Booth leaflet At the booth we're also distributing a leaflet introducing KTC. It carries our hope that you'll get to know the products KTC ships! Booth challenge For the booth challenge, titled "Found the Code!", we've prepared a game where you search a piece of code for the part that performs a given task! Each KTC product team prepared its own puzzles, and the puzzles rotate over time, so don't miss them! We of course put care into the puzzles themselves, but also into the details that unify the booth's atmosphere: we borrowed the wooden frame that displays our posters and blacked it out with DIY stickers, adjusted the background so the two-column code layout is easy to read, and designed the puzzle text to match the booth's look! 
We also took this opportunity to make a roll-up banner. Come try the booth challenge surrounded by KINTO blue. Finally The people who took on this mountain of requests and delivered fantastic designs are Sugimoto Aya-san and Awano-san of the Creative Office! During novelty production they brought handmade prototypes and communicated in ways that made the final image concrete. Thanks to them, we're ready to welcome everyone who visits our booth with confidence. The event opens on August 22! We'll be waiting at our sponsor booth in Rohm Square. Please come visit!
Hello, I am _awache ( @_awache ), from the DBRE team at KINTO Technologies (KTC). In this article, I’ll provide a comprehensive overview of how I implemented a safe password rotation mechanism for database users, primarily those registered in Aurora MySQL, the challenges I encountered, and the peripheral developments that arose during the process. Since this will be a lengthy blog post, here's a brief summary to start. Summary Background Our company has a policy requiring database users to rotate their passwords at regular intervals. Solutions considered MySQL Dual Password: set primary and secondary passwords using the Dual Password function available in MySQL 8.0.14 and later. AWS Secrets Manager rotation function: enable automatic password updates and strengthen security using Secrets Manager. Adopted The rotation function of AWS Secrets Manager was adopted for its easy setup and comprehensive coverage. Project kickoff At the beginning of the project, we created an inception deck and clarified key boundaries regarding cost, security, and resources. What was developed in this project Lambda functions After thorough research, we developed multiple Lambda functions because the AWS-provided rotation mechanism did not fully meet KTC's requirements. Lambda function for the single user strategy Purpose: To rotate passwords for a single user. Settings: Managed by Secrets Manager. These functions execute at the designated rotation times in Secrets Manager to update passwords. Lambda function for the alternate users rotation strategy Purpose: To update passwords for two users alternately in order to enhance availability. Settings: Managed by Secrets Manager. In the initial rotation, a second user (a clone) is created; passwords are switched in subsequent rotations. Lambda function for Secret Rotation notifications Purpose: To report the results of secret rotations. 
Trigger: CloudTrail events for RotationStarted, RotationSucceeded, and RotationFailed. Function: Stores the rotation results in DynamoDB and sends notifications to Slack; additionally, it posts a follow-up message with a timestamp to the Slack thread. Lambda function for managing DynamoDB storage of rotation results Purpose: To store rotation results in DynamoDB as evidence for submission to the security team. Function: Executes in response to CloudTrail events to save the rotation results to DynamoDB and send SLI notifications based on the stored data. Lambda function for SLI notifications Purpose: To monitor the status of rotations and send SLI notifications. Function: Retrieves information from DynamoDB to track the progress of secret rotation and sends notifications to Slack as needed. Lambda function for rotation schedule management Purpose: To determine the rotation execution time for a DBClusterID. Function: Generates a new schedule based on the settings of existing secret rotations, saves it to DynamoDB, and sets the rotation window and duration. Lambda function for applying rotation settings Purpose: To apply the scheduled rotation settings to Secrets Manager. Function: Configures secret rotation at the designated times using information from DynamoDB. A tool for registering secret rotations We developed an additional tool to facilitate local registration of secret rotations. Tool for setting the Secrets Rotation schedule Purpose: To set secret rotation schedules per database user. Function: Applies the secret rotation settings based on data saved in DynamoDB for the specified DBClusterID and DBUser. Final Architecture Overview We initially believed it could be done much more simply, but it turned out to be more complex than expected... ![The whole image](/assets/blog/authors/_awache/20240812/secrets_rotation_overview_en.png =750x) Results Automated the entire secret rotation process, reducing security and management effort. 
Developed a comprehensive architecture that meets governance requirements. Leveraged secret rotation to enhance database safety and efficiency, with ongoing improvement efforts. Now, let's explore the main story. Introduction KTC has a policy requiring database users to rotate their passwords at regular intervals . However, rotating passwords is not a straightforward process. To change a database user's password, the system must first be stopped. Then, the password in the database is updated, system settings files are adjusted, and finally, system operation must be verified. In other words, we must perform a maintenance operation that provides no direct value, stopping the system just to change a database user's password. It would be highly inconvenient to do this for every service at very short intervals. This article explains how we addressed this challenge through specific examples. Solution Considerations We considered two major solutions: using MySQL's Dual Password function, or making use of the rotation function of Secrets Manager. MySQL Dual Password The Dual Password function is available in MySQL starting from version 8.0.14. It allows us to set both a primary and a secondary password, enabling password changes without stopping the system or applications. The basic steps for using the Dual Password function are as follows: Set a new primary password with `ALTER USER 'user'@'host' IDENTIFIED BY 'new_password' RETAIN CURRENT PASSWORD;`, which keeps the current password as the secondary one. Update all applications to connect with the new password. Discard the secondary password with `ALTER USER 'user'@'host' DISCARD OLD PASSWORD;`. Rotation function of Secrets Manager AWS Secrets Manager supports periodic automatic updates of secrets. Activating secret rotation not only reduces the effort of managing passwords manually but also helps enhance security. 
To activate it, one only needs to configure the rotation policy in Secrets Manager and assign a Lambda function to handle the rotation. ![Rotation setting screen](/assets/blog/authors/_awache/20240812/rotation_setting_en.png =750x) Lambda rotation function Creating the rotation function: By automatically deploying the code provided by AWS, we can use it immediately without creating custom Lambda functions. Using a rotation function from your account: You can either create a custom Lambda function or, to reuse one, select the function created earlier under "Creating the rotation function." Rotation strategy Single user This method rotates the password of a single user. The database connection is maintained, allowing authentication information to be updated, and an appropriate retry strategy reduces the risk of access denial. After rotation, new connections require the updated authentication information (password). Alternate users Initially, I found the alternate users strategy hard to grasp, even after reading the manual. After careful consideration, I’d articulate it as follows: this method alternates password updates between two users; on each rotation, the authentication information (a combination of username and password) stored in the secret is updated. After a second user (a clone) is created during the initial rotation, the passwords are switched in subsequent rotations. This approach is ideal for applications that require high database availability, as it ensures that valid authentication information is available even during rotations. The clone user has the same access rights as the original user, so it's important to synchronize the permissions of both users when updating their access rights. Below is an image illustrating the concept explained above. 
Changes before and after rotation ![Before/after rotation](/assets/blog/authors/_awache/20240812/rotation_exec_en.png =750x) Though it may be a bit difficult to see, '_clone' is appended to the username during password rotation. In the first rotation, a new user with the same privileges as the existing user is created on the database side; from the second rotation onward, that pair of users is reused and their passwords are updated alternately. ![Alternate user](/assets/blog/authors/_awache/20240812/multi_user_rotation_en.png =750x) The Solution Adopted We decided to use the rotation function of Secrets Manager for the following reasons: Easy to set up MySQL Dual Password: the updated password must be applied to the application after preparing a script for the password change. Rotation function of Secrets Manager: the product side does not need to modify code as long as the service consistently retrieves connection information from Secrets Manager. Comprehensiveness MySQL Dual Password: supported only in MySQL 8.0.14 and later (Aurora 3.0 or later). Rotation function of Secrets Manager: supports all RDBMS used by KTC (Amazon Aurora and Redshift), and provides additional support beyond database passwords; it can also manage API keys and other credentials used in the product. Toward the Project Kickoff Before starting the project, we first clarified our boundaries for cost, security, and resources to determine what should and shouldn’t be done. We also created an inception deck. The following is an outline of what was discussed:

Breakdown of responsibilities

| Topic | Product team | DBRE team |
| --- | --- | --- |
| Cost | Responsible for the cost of Secrets Manager for storing database passwords | Responsible for the cost associated with the secret rotation mechanism |
| Security | Products using this mechanism must always retrieve database connection information from Secrets Manager. After a rotation, connection information must be updated by redeploying the application and other components before the next rotation occurs. | Ensuring that rotations are completed within the company's defined governance limits. Providing records of secret rotations to the security team as required. Passwords must not be stored in plain text, to maintain traceability. Sufficient security must be maintained in the mechanism used for rotation. |
| Resources | Ensuring that all database users are managed by Secrets Manager | Ensuring that the implementation of secret rotation resources uses the minimum necessary configuration |

What needed to be done Execute secret rotation within the company’s defined governance limits. Detect and notify the start, completion, success, or failure of a secret rotation to the relevant product teams. Ensure recovery from a failed secret rotation without affecting the product. Align rotation timing with the schedule set for users registered in the same DB cluster. Monitor compliance with the company’s governance standards. Inception deck (an excerpt) Why are we here To develop and implement a system that complies with the company’s security policy and automatically rotates database passwords at regular intervals. To strengthen security, reduce management effort, and ensure compliance through automation. Led by the DBRE team, to achieve safer and more efficient password management by leveraging AWS's rotation strategies. Elevator pitch Our goal is to reduce the risk of security breaches and ensure compliance. We offer a service called Secret Rotation, designed for product teams and the security group, to manage database passwords. It strengthens security automatically and reduces management effort, and unlike MySQL’s Dual Password feature, it is compatible with all the RDBMS options we use on AWS. Through AWS services, we utilize the latest cloud technologies to provide flexible and scalable security measures that meet enterprise data protection standards. 
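For context, enabling rotation on a single secret conceptually boils down to one Secrets Manager API call. Below is a hedged boto3 sketch, not the project's actual code: the secret name and Lambda ARN are hypothetical placeholders, and the boto3 call itself is shown only as a comment:

```python
# Hedged sketch: assembling the parameters that boto3's
# secretsmanager rotate_secret API expects. Names below are hypothetical.
def build_rotation_request(secret_id: str, rotation_lambda_arn: str,
                           days: int = 30) -> dict:
    """Build keyword arguments for secretsmanager rotate_secret."""
    return {
        "SecretId": secret_id,
        "RotationLambdaARN": rotation_lambda_arn,
        "RotationRules": {"AutomaticallyAfterDays": days},
    }

req = build_rotation_request(
    "prod/app/db-user",  # hypothetical secret name
    "arn:aws:lambda:ap-northeast-1:123456789012:function:rotate-mysql-single-user",
)
# With boto3 this would be applied as:
#   boto3.client("secretsmanager").rotate_secret(**req)
```

The hard part, as the rest of this article shows, is not this call but everything around it: the rotation Lambda itself, notifications, scheduling, and governance evidence.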
Proof of Concept (PoC) To execute the PoC, we prepared the necessary resources in our testing environment, such as a DB cluster for our own verification. We found that implementing the rotation mechanism through the console was straightforward, leading us to anticipate a rapid deployment of the service. At that time, however, I had no way of knowing that trouble was just around the corner... Architecture Providing secret rotation alone is not enough without a notification mechanism for users, so I’ll introduce an architecture that includes this essential feature. Secret Rotation Overview ![The whole architecture](/assets/blog/authors/_awache/20240812/secrets_rotation_archtecture_en.png =750x) Secret rotation is driven by the secrets registered in Secrets Manager. For clarity, here’s an example of a monthly update. In this case, the same password can be used for up to two months due to the monthly rotation schedule. During this period, you can comply with the company's rotation rules with minimal effort, while aligning with any necessary deployment timing for product releases. Rotation results stored in DynamoDB During Secret Rotation, a status event is written to CloudTrail at the following points: process start (RotationStarted), process failure (RotationFailed), and process end (RotationSucceeded). See the rotation log entries for additional details. We configured a CloudWatch Event so that these events trigger the Lambda function for notification. Below is one of the Terraform code snippets used:

```hcl
cloudwatch_event_name        = "${var.environment}-${var.sid}-cloudwatch-event"
cloudwatch_event_description = "Secrets Manager Secrets Rotation. (For ${var.environment})"
event_pattern = jsonencode({
  "source" : ["aws.secretsmanager"],
  "$or" : [
    { "detail-type" : ["AWS API Call via CloudTrail"] },
    { "detail-type" : ["AWS Service Event via CloudTrail"] }
  ],
  "detail" : {
    "eventSource" : ["secretsmanager.amazonaws.com"],
    "eventName" : [
      "RotationStarted",
      "RotationFailed",
      "RotationSucceeded",
      "TestRotationStarted",
      "TestRotationSucceeded",
      "TestRotationFailed"
    ]
  }
})
```

The stored rotation results can be used as evidence for submission to the security team. The architecture reflecting the components discussed so far is as follows: ![Architecture only for Secret Rotation](/assets/blog/authors/_awache/20240812/secrets_rotation_archtecture2_en.png =750x) The AWS resources needed to provide these functions are: A Lambda function for the alternate users strategy, set in Secrets Manager (separate Lambda functions are required for MySQL and Redshift). We developed this in-house to meet company rules for infrastructure compliance, since we encountered several elements that automatically generated Lambda functions could not address, such as Lambda function settings and IAM configurations. A Lambda function for the single user strategy, set in Secrets Manager (again, separate Lambdas are needed for MySQL and Redshift); an administrator user's password cannot be rotated with the alternate users strategy. A Lambda function for Secret Rotation notifications; we had to build our own mechanism for notifying that a rotation was performed by Secret Rotation. Since the status and results are stored in CloudTrail, we can use them as a trigger for Slack notifications. Note that the Lambda function is invoked separately for each triggering event. DynamoDB for storing rotation results; the rotation results are stored in DynamoDB, and additionally the Slack thread timestamp is stored to clarify which notification each result relates to. 
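As a rough illustration of the notification side, a Lambda handler triggered by this CloudWatch event might pull the rotation status out of the CloudTrail-based payload like this. This is a sketch, not the actual KTC Lambda; the exact field names in the sample payload (e.g. `requestParameters.secretId`) are assumptions:

```python
# Sketch (not the actual KTC code): extracting the rotation status from the
# event CloudWatch Events delivers to Lambda. The payload field names below
# are assumptions made for illustration.
def handler(event: dict, context=None) -> dict:
    detail = event.get("detail", {})
    event_name = detail.get("eventName", "")  # e.g. "RotationSucceeded"
    secret_id = detail.get("requestParameters", {}).get("secretId", "unknown")
    status = {
        "RotationStarted": "started",
        "RotationSucceeded": "succeeded",
        "RotationFailed": "failed",
    }.get(event_name, "ignored")
    # A real implementation would write to DynamoDB and post to Slack here.
    return {"secret_id": secret_id, "status": status}

sample = {
    "source": "aws.secretsmanager",
    "detail": {
        "eventSource": "secretsmanager.amazonaws.com",
        "eventName": "RotationSucceeded",
        "requestParameters": {"secretId": "dummy/secret"},  # hypothetical
    },
}
result = handler(sample)
```

Mapping the three event names to a single status field keeps the downstream DynamoDB and Slack logic simple.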
Why we chose to manage the Lambda function for secret rotation ourselves As a prerequisite, we use the AWS-provided Lambda code. Since AWS can deploy this code automatically, it can be used immediately without writing individual Lambda functions. However, we commit the code set to our repository and deploy it using Terraform. The main reasons are as follows: Multiple services exist within KTC's AWS accounts. When several services share the same AWS account, the IAM privileges of automatically created roles become too broad. Services are also provided across regions; since a Lambda function cannot be invoked cross-region, the same code must be deployed to each region using Terraform. We have a large number of database users that require Secret Rotation settings: fewer than 200 database clusters and fewer than 1,000 database users, but the workload would still be overwhelming if we manually built the setup for each secret. Applying company rules: our rules require tags in addition to IAM settings, and automatically, individually created functions would need tags configured afterwards. The AWS-provided code is updated periodically; since the code is provided by AWS, this inevitably happens, and an unexpected update could cause trouble. I have listed several reasons, but in a nutshell, it was more convenient for us to manage the code ourselves given the in-company rules. How we managed the Lambda functions for Secrets Rotation This was really a hard job. At the beginning, we thought it would go smoothly, as AWS provides samples of the Lambda code . But after deploying based on them, we saw many kinds of errors. While we had some success during our own verification, we faced significant challenges when errors occurred in specific database clusters. However, we discovered that the code automatically generated from the console was error-free and stable, highlighting the need to use it effectively. There are several approaches, but let me share the ones we tried. 
Approach 1: Deploy from the sample code. The code itself is visible at the link mentioned above. However, it is hard to match all the necessary modules, including versions. Besides, this Lambda code is frequently updated, and we would have to keep up. We gave up on this approach as it was too much work; since we needed to control this code anyway, we realized we would be better off bringing it in-house by another method. Approach 2: Download the Lambda code after automatically generating the Secret Rotation function from the console. This method generates the code automatically each time and downloads it locally to apply to our Lambda. It is not too difficult to do. However, depending on the timing of automatic code generation, a downloaded copy may differ from the existing, working code. This approach would have worked, but we found it burdensome to deploy every time the code needed updates. Approach 3: Examine the CloudFormation template used behind the scenes when the Secret Rotation function is automatically generated from the console. When the function is generated from the console, AWS CloudFormation operates in the background. By examining the template at this stage, we can obtain the S3 path of the code automatically generated by AWS. We adopted this third method as the most efficient: it lets us fetch the Zip file directly from S3, eliminating the need to generate Secret Rotation code each time. The actual script to download from S3 is as follows:

```shell
#!/bin/bash
set -eu -o pipefail

# Navigate to the script directory
cd "$(dirname "$0")"

source secrets_rotation.conf

# Function to download and extract the Lambda function from S3
download_and_extract_lambda_function() {
  local s3_path="$1"
  local target_dir="../lambda-code/$2"
  local dist_dir="${target_dir}/dist"

  echo "Downloading ${s3_path} to ${target_dir}/lambda_function.zip..."

  # Remove existing lambda_function.zip and dist directory
  rm -f "${target_dir}/lambda_function.zip"
  rm -rf "${dist_dir}"

  if ! aws s3 cp "${s3_path}" "${target_dir}/lambda_function.zip"; then
    echo "Error: Failed to download file from S3."
    exit 1
  fi
  echo "Download complete."

  echo "Extracting lambda_function.zip to ${dist_dir}..."
  mkdir -p "${dist_dir}"
  unzip -o "${target_dir}/lambda_function.zip" -d "${dist_dir}"
  cp -p "${target_dir}/lambda_function.zip" "${dist_dir}/lambda_function.zip"
  echo "Extraction complete."
}

# Create directories if they don't exist
mkdir -p ../lambda-code/mysql-single-user
mkdir -p ../lambda-code/mysql-multi-user
mkdir -p ../lambda-code/redshift-single-user
mkdir -p ../lambda-code/redshift-multi-user

# Download and extract Lambda functions
download_and_extract_lambda_function "${MYSQL_SINGLE_USER_S3_PATH}" "mysql-single-user"
download_and_extract_lambda_function "${MYSQL_MULTI_USER_S3_PATH}" "mysql-multi-user"
download_and_extract_lambda_function "${REDSHIFT_SINGLE_USER_S3_PATH}" "redshift-single-user"
download_and_extract_lambda_function "${REDSHIFT_MULTI_USER_S3_PATH}" "redshift-multi-user"

echo "Build complete."
```

Running this script at deployment time updates the code; conversely, the existing code continues to be used as long as the script is not run. Lambda function and DynamoDB for notifying Secret Rotation results Notification of Secret Rotation results is triggered by PUT events in CloudTrail. We considered modifying the Lambda function for Secret Rotation itself to simplify things, but that would have undermined our effort to use the AWS-provided code as-is. Before starting development, I initially thought all we needed was a PUT trigger for notifications. But things were not that easy. Let’s look at the whole picture again. ![The whole architecture](/assets/blog/authors/_awache/20240812/secrets_rotation_archtecture_en.png =750x) The notification process creates a Slack thread at the start of a rotation and adds a follow-up message to that thread when the rotation completes. 
![Slack Notification](/assets/blog/authors/_awache/20240812/slack_notification.png =750x)

The events we use this time are as follows:

- Event at the start of processing
  - PUT event to CloudTrail: RotationStarted
- Events at the end of processing
  - PUT event to CloudTrail when processing succeeds: RotationSucceeded
  - PUT event to CloudTrail when processing fails: RotationFailed

On RotationStarted, the event at the start of processing, we store the Slack timestamp in DynamoDB so that later events can use it to add postscripts to the thread. Given this, we had to decide what unit makes a DynamoDB item unique. We chose to combine the SecretID from Secrets Manager with the scheduled date of the next rotation. The main columns of the DynamoDB table are as follows (in practice, more information is stored):

- SecretID: Partition key
- NextRotationDate: Sort key. The schedule of the next rotation, obtainable with describe
- SlackTS: The timestamp of the first Slack message sent on the RotationStarted event. Using this timestamp, we can add postscripts to the Slack thread.
- VersionID: The version of the secret at the RotationStarted event. By keeping the last version so we can revert to the previous state at once if trouble occurs, it is possible to restore the password information from before the rotation.

The biggest challenge we faced was that multiple Lambda invocations were triggered in stages, because several PUT events fire during a single Secret Rotation. Even though I understood this in theory, it proved extremely troublesome in practice. As a result, we had to pay attention to the following: the Secret Rotation process itself is very fast, and since the PUT events to CloudTrail for RotationStarted and RotationSucceeded (or RotationFailed) occur at almost the same time, the notification Lambda runs twice almost simultaneously.
Because the notification Lambda also handles both the Slack notification and the DynamoDB registration, the end-of-processing event may run before the RotationStarted processing completes. When this happens, a new message is posted to Slack without knowing which thread it belongs to. To solve this, we chose a simple approach: when the event name is anything other than RotationStarted, the Slack notification is delayed for a couple of seconds.

Secret Rotation may fail due to misconfiguration and the like. In most cases the product is not immediately affected, because the failure occurs before the DB password is updated. In such a case, recovery can be performed with the following commands:

```shell
# VersionIdsToStages obtains the version ID of AWSPENDING
$ aws secretsmanager describe-secret --secret-id ${secret_id} --region ${region}

# ---------- Output sample of Versions ----------
"Versions": [
    {
        "VersionId": "7c9c0193-33c8-3bae-9vko-4129589p114bb",
        "VersionStages": [
            "AWSCURRENT"
        ],
        "LastAccessedDate": "2022-08-30T09:00:00+09:00",
        "CreatedDate": "2022-08-30T12:53:12.893000+09:00",
        "KmsKeyIds": [
            "DefaultEncryptionKey"
        ]
    },
    {
        "VersionId": "cb804c1c-6d1r-4ii3-o48b-17f638469318",
        "VersionStages": [
            "AWSPENDING"
        ],
        "LastAccessedDate": "2022-08-30T09:00:00+09:00",
        "CreatedDate": "2022-08-30T12:53:22.616000+09:00",
        "KmsKeyIds": [
            "DefaultEncryptionKey"
        ]
    }
],
# -----------------------------------------------

# Delete the subject version
$ aws secretsmanager update-secret-version-stage --secret-id ${secret_id} --remove-from-version-id ${version_id} --version-stage AWSPENDING --region ${region}

# Then, from the console, rotate the subject secret immediately
```

Although this has not actually occurred, if the database password were changed due to an issue, we would execute the following command to retrieve the previous password. Since we also use the alternating-users rotation strategy, product access to the database is not immediately disabled.
We believe it will not become a problem until the next rotation is executed.

```shell
$ aws secretsmanager get-secret-value --secret-id ${secret_id} --version-id ${version_id} --region ${region} --query 'SecretString' --output text | jq .

# For ${user} and ${password}, we set the values obtained by aws secretsmanager get-secret-value
$ mysql --defaults-extra-file=/tmp/.{DB username for administration}.cnf -e "ALTER USER ${user} IDENTIFIED BY '${password}'"

# Check connection
$ mysql --defaults-extra-file=/tmp/.user.cnf -e "STATUS"
```

With the work up to this point, we had a foundation to achieve the following:

- Detect and notify the relevant product teams of the start, completion, success, or failure of a secret rotation.
- Ensure recovery from a failed secret rotation without affecting the product.

Our battle did not stop here

Although we had prepared the major functions described above, we identified three additional tasks to address:

- Execute secret rotation within the company's defined governance limits.
- Align rotation timing with the schedule set by the users registered in the same DB Cluster.
- Monitor compliance with the company's governance standards.

To achieve these, we had to develop peripheral functions.

Building a mechanism to monitor compliance with the governance standards defined by the company

What we need to do here, in a nutshell, is obtain a list of all users in every DB Cluster and check that each user's password update date falls within the duration required by corporate governance. We can obtain the latest password update date of every user by logging in to each DB Cluster and executing the following query.
```sql
mysql> SELECT User, password_last_changed FROM mysql.user;
+----------------+-----------------------+
| User           | password_last_changed |
+----------------+-----------------------+
| rot_test       | 2024-06-12 07:08:40   |
| rot_test_clone | 2024-07-10 07:09:10   |
:                :                       :
+----------------+-----------------------+
10 rows in set (0.00 sec)
```

This has to be executed in every DB Cluster. However, we already obtain the metadata of all DB Clusters daily, automatically generate an Entity Relationship Diagram and my.cnf, and run a script that checks the databases for inappropriate settings. So we could solve this simply by adding a step that obtains the list of users and their latest password update dates and saves them to DynamoDB. The main columns of the DynamoDB table are as follows:

- DBClusterID: Partition key
- DBUserName: Sort key
- PasswordLastChanged: Latest password update date

In practice, the following users have to be handled specially:

- Users automatically generated for RDS use, which we do not control
- Users whose names end in "_clone", generated by the Secret Rotation function

The former should be excluded. For this reason, we obtain only the data we actually need with the following query (clone users are folded back into their base user name):

```sql
SELECT
    CONCAT_WS(',',
        IF(RIGHT(User, 6) = '_clone', LEFT(User, LENGTH(User) - 6), User),
        Host,
        password_last_changed)
FROM mysql.user
WHERE User NOT IN (
    'AWS_COMPREHEND_ACCESS', 'AWS_LAMBDA_ACCESS', 'AWS_LOAD_S3_ACCESS',
    'AWS_SAGEMAKER_ACCESS', 'AWS_SELECT_S3_ACCESS', 'AWS_BEDROCK_ACCESS',
    'rds_superuser_role', 'mysql.infoschema', 'mysql.session', 'mysql.sys',
    'rdsadmin', '');
```

In addition, we prepared a Lambda for SLI that gathers the information from DynamoDB.
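As a rough sketch of the kind of aggregation such an SLI Lambda might do over the DynamoDB items: the field names and the 90-day limit below are illustrative assumptions for the example, not our actual schema or governance number.

```python
from datetime import datetime, timedelta

def compute_sli(items, now, max_age_days=90):
    """Aggregate per-user items into the SLI ratios (illustrative sketch)."""
    total = len(items)
    due_limit = now - timedelta(days=max_age_days)
    pct = lambda n: round(100.0 * n / total, 1) if total else 0.0
    return {
        "Total Items": total,
        "Secrets Exist Ratio": pct(sum(1 for i in items if i.get("secret_exists"))),
        "Rotation Enabled Ratio": pct(sum(1 for i in items if i.get("rotation_enabled"))),
        "Password Change Due Ratio": pct(sum(
            1 for i in items if i["password_last_changed"] >= due_limit)),
    }
```

In the real system the items would come from a DynamoDB scan, and the result would be posted to Slack.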
Consequently, the output looks like this:

![SLI notification](/assets/blog/authors/_awache/20240812/sli.png =750x)

The output content is as follows:

- Total Items: The number of all users existing in all DB Clusters
- Secrets Exist Ratio: The ratio of SecretIDs that comply with the Secrets Manager naming rule used at KINTO Technologies
- Rotation Enabled Ratio: The ratio of secrets with the Secret Rotation function enabled
- Password Change Due Ratio: The ratio of users who comply with the corporate governance rule

The important thing is to bring the Password Change Due Ratio to 100%. As long as this ratio is 100%, there is no need to depend on the Secret Rotation function itself. With this SLI notification mechanism, we achieved the following:

- Monitor compliance with the company's governance standards.

A mechanism to synchronize rotation timing with the schedule set by users registered in the same DB Cluster

We had to write two pieces of code to realize this mechanism:

1. A mechanism to decide the rotation execution time for each DBClusterID.
2. A mechanism to set the rotation on Secrets Manager at the time determined above.

Each of these is described below.

The mechanism to decide the rotation execution time for each DBClusterID

As a premise, the execution time of Secret Rotation is described by a schedule called a rotation window. The ways to express a rotation window can be summarized in two forms:

- rate expression: Used to set the rotation interval as a specified number of days
- cron expression: Used to set the rotation timing in detail, such as a specific day of the week or time of day

We decided to use a cron expression, as we wanted rotations to run during the daytime on weekdays. The other setting is the "window duration" of a rotation. By combining these two, we can control the execution timing of a rotation to some extent.
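For illustration only, a rotation window combining a cron expression with a window duration might look like the following when passed as `RotationRules` to Secrets Manager's rotate_secret API; the specific schedule here is a made-up example, not our real setting.

```python
# Hypothetical rotation window: anchored at 04:00 UTC on Tuesdays, with a
# 3-hour window duration. Fields: minute hour day-of-month month day-of-week year.
rotation_rules = {
    "ScheduleExpression": "cron(0 4 ? * TUE *)",
    "Duration": "3h",  # window duration; the default when omitted is discussed below
}
```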
The relation between the rotation window and the window duration is as follows:

- The rotation window specifies the time a rotation ends, not when it starts
- The window duration determines how much leeway the execution has relative to the time set by the rotation window
- The window duration defaults to 24 hours

That means, for example, that if the rotation window is set to 10:00 AM on the fourth Tuesday of every month but the window duration is left unspecified (24 hours), Secret Rotation may execute at any time between 10:00 AM on the fourth Monday and 10:00 AM on the fourth Tuesday. This is hard to follow intuitively, but if we do not understand this relationship, Secret Rotation may run at unexpected times.

With those assumptions in mind, we determined the requirements as follows:

- Rotations for the DB users in one DBClusterID are executed in the same time slot
- The window duration is three hours
  - If the window is set too short, problems may pile up in the same time slot, from a trouble report through to its recovery
- Execution timing is between 09:00 and 18:00 on weekdays, Tuesday to Friday
  - We do not execute on Mondays, as public holidays are more likely to fall on that day
  - Since the window duration is fixed at three hours, what can be set in the cron expression is the six hours between 12:00 and 18:00
  - Only UTC can be set in a cron expression
- Execution timings should be dispersed as much as possible
  - If many Secret Rotations run at the same time, various API rate limits may be affected
And if an error of some kind occurs, many alerts fire at once and we cannot respond to them all at the same time.

The overall flow of the Lambda processing is as follows:

1. Data acquisition
   - Acquire the DBClusterID list from DynamoDB
   - Acquire the existing Secret Rotation settings from DynamoDB
2. Schedule generation
   - Initialize all combinations (slots) of week, day, and hour
   - Check whether the subject DBClusterID already exists in the existing Secret Rotation settings
     - If it exists, place the DBClusterID in the same slot as in the existing settings
   - Distribute new DBClusterIDs evenly across the slots
     - Add new entries to an empty slot; if a slot is not empty, move on to the next slot
     - Repeat until the end of the DBClusterID list
3. Data storage
   - Store the new Secret Rotation settings after filtering out those that duplicate existing data
4. Error handling and notification
   - When a serious error occurs, an error message is sent to Slack

The DynamoDB columns stored here are as follows:

- DBClusterID: Partition key
- CronExpression: The cron expression to set on the Secret Rotation

It is a bit hard to follow, but the resulting state looks like this:

![Slot putting in image](/assets/blog/authors/_awache/20240812/decide_en.png =750x)

That covers the mechanism for deciding the rotation execution time for each DBClusterID. However, this alone does not set up the actual Secret Rotation, so we also need a mechanism that applies these settings.

The mechanism to set the rotation on Secrets Manager at the time determined above

We do not believe that the Secret Rotation mechanism is the only means of maintaining corporate governance. What matters more is compliance with the governance standards defined by the company. Accordingly, instead of forcing everyone to use this mechanism, we wanted a mechanism that users would choose of their own accord, as the safest and simplest option conceived by DBRE.
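Returning briefly to the schedule-generation flow above, the even distribution of DBClusterIDs across slots could be sketched like this; the slot layout and all names are illustrative assumptions, not our actual implementation.

```python
def distribute(slots, existing, cluster_ids):
    """Place each DBClusterID into a (day, hour) slot.

    slots:       {(day, hour): [cluster_id, ...]}, initially empty lists
    existing:    {cluster_id: (day, hour)} settings already stored in DynamoDB
    cluster_ids: all DBClusterIDs fetched from DynamoDB
    """
    slot_keys = list(slots)
    new_ids = []
    for cid in cluster_ids:
        if cid in existing:
            # A cluster that already has a setting keeps its current slot.
            slots[existing[cid]].append(cid)
        else:
            new_ids.append(cid)
    # New clusters go into an empty slot; if a slot is taken, try the next one.
    idx = 0
    for cid in new_ids:
        tried = 0
        while slots[slot_keys[idx % len(slot_keys)]] and tried < len(slot_keys):
            idx += 1
            tried += 1
        slots[slot_keys[idx % len(slot_keys)]].append(cid)
        idx += 1
    return slots

# Example: four slots (Tuesday/Wednesday at 12:00 and 13:00 UTC).
slots = {(d, h): [] for d in ("TUE", "WED") for h in (12, 13)}
result = distribute(slots, {"db-a": ("WED", 13)}, ["db-a", "db-b", "db-c"])
```

In the real system, each resulting slot would then be turned into the cron expression stored per DBClusterID.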
For example, among the users in a DB Cluster, one user may wish to use Secret Rotation while another insists on managing passwords themselves by a different method. To satisfy such requests, we needed a command-line tool that sets up Secret Rotation per database user linked to a DBClusterID.

As DBRE, we have been developing a tool called dbre-toolkit that turns our daily work into command lines. It is a package of tools, such as one that easily executes a Point In Time Restore and one that fetches DB connection users from Secrets Manager to create a defaults-extra-file. This time, we added a subcommand to it:

```shell
% dbre-toolkit secrets-rotation -h
2024/08/01 20:51:12 dbre-toolkit version: 0.0.1
It is a command to set Secrets Rotation based on the Secrets Rotation schedule linked to a designated Aurora Cluster.

Usage:
  dbre-toolkit secrets-rotation [flags]

Flags:
  -d, --DBClusterId string   [Required] DBClusterId of the subject service
  -u, --DBUser string        [Required] a subject DBUser
  -h, --help                 help for secrets-rotation
```

The command completes the Secret Rotation setup by acquiring the designated combination of DBClusterID and DBUser from DynamoDB and registering the information in Secrets Manager. With this, we achieved the following:

- Execute secret rotation within the company's defined governance limits.
- Align rotation timing with the schedule set by the users registered in the same DB Cluster.

By doing all of this, we finally completed everything we had set out to do.

Conclusion

Here's what we have achieved:

- We developed a mechanism to detect and notify relevant product teams about the start, completion, success, or failure of a secret rotation. This involved creating a system that detects CloudTrail PUT events and sends appropriate notifications.
- We ensured recovery from failed secret rotations without affecting the product, by preparing steps to handle potential issues.
We also found that understanding how Secret Rotation works helps minimize the risk of fatal errors.

- We executed secret rotations within the company's defined governance limits, by implementing a mechanism dedicated to this.
- We synchronized rotation timing with the schedules set by the users registered in the same DB Cluster, by developing a mechanism that stores the cron expression for each Secret Rotation setting in DynamoDB per DBClusterID.
- We enhanced compliance monitoring according to the company's governance, by developing a mechanism for SLI notification.

The whole picture became like this:

![The whole image](/assets/blog/authors/_awache/20240812/secrets_rotation_overview_en.png =750x)

The overall architecture turned out more complex than we initially imagined; in other words, we had expected Secret Rotation management to be simpler. The Secret Rotation function provided by AWS is very effective if you simply use it as-is. However, the out-of-the-box solution did not fully meet our requirements, so we had to build many elements in-house, and we went through numerous rounds of trial and error to reach this point.

In the future, we aim to create a corporate environment where everyone can seamlessly use the KTC databases with the Secret Rotation mechanism we've developed. Our goal is to ensure the databases remain safe and continuously available.

KINTO Technologies' DBRE team is currently recruiting new teammates! We welcome casual interviews as well. If you're interested, please feel free to contact us via DM on X. We would also be delighted if you followed our dedicated recruitment X account!