KINTO Technologies Tech Blog
Introduction

Hello, and thank you for visiting. My name is ITOYU, and I am in charge of front-end development in the New Car Subscription Development Group of the KINTO ONE Development Department. Nowadays, it is common to use frameworks such as Vue.js, React, and Angular when creating web applications. The New Car Subscription Development Group also uses React and Next.js for development. Libraries and frameworks are updated frequently, as with the releases of React 19 and Next.js 15. Each time that happens, you need to get up to speed with the new features and changes and update your knowledge. In addition, the evolution of front-end technology has been remarkable in recent years. Libraries and frameworks that were in use until recently become obsolete in a matter of months, and new ones appear all the time. Under these circumstances, front-end developers need to be on constant lookout for new technologies, libraries, and frameworks, gather information, and keep on learning. This is the reality of front-end development, and also its joy. With passion and insatiable curiosity, front-end developers seek to learn and master new technologies, libraries, and frameworks to improve their skills, develop better web applications efficiently, pursue best practices, and become front-end gurus.

However, at the root of every front-end library and framework, there is JavaScript. Do we really understand JavaScript 100% and use it with total mastery? Is it really possible to master libraries and frameworks without fully understanding JavaScript's core features? Can we really call ourselves front-end gurus? Personally, I cannot confidently answer "yes" to that question. So, in order to become a front-end expert, I decided to relearn JavaScript and fill in the gaps in my knowledge.

The purpose of this article

As a first step, my goal is to learn about the basic JavaScript concept of scope and understand it more deeply. You may think this is too basic! I'm sure most front-end engineers use scope without even thinking about it. However, when it comes to putting the concept of scope, and the knowledge and terms related to it, into words, it is surprisingly difficult to do. This article aims to explain the different types of scope to help you understand the concept as a whole. Reading it will most likely not furnish you with any new implementation techniques. However, understanding scope should help you understand how JavaScript behaves and lay the foundation for writing better code.

:::message
The JavaScript code and concepts in this article are explained on the assumption that they run in a browser. Please be aware that they might behave differently in another environment (such as Node.js).
:::

Scope

In JavaScript, there is a concept called scope. Scope is the range within which variables and functions can be referenced by running code. First, let us take a look at the following types of scope:

- Global scope
- Function scope
- Block scope
- Module scope
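Before looking at each type in detail, here is a minimal sketch that puts the first three types side by side, assuming the code runs in a classic browser `<script>` (module scope, which needs `<script type="module">`, is only noted in a comment):

```javascript
var globalVar = 'global';   // added to window (global scope)
let scriptVar = 'script';   // script scope: reachable from later scripts, but not a window property

function outer() {
  const fnVar = 'function';      // function scope: visible only inside outer()
  if (true) {
    const blockVar = 'block';    // block scope: visible only inside this { } block
    console.log(globalVar, scriptVar, fnVar, blockVar); // all four are reachable here
  }
  // console.log(blockVar);      // ReferenceError: blockVar is not defined
}
outer();

// Module scope: in a file loaded with <script type="module">,
// export const moduleVar = '...'; is reachable elsewhere only via import.
```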
Global scope

Global scope refers to the scope that can be accessed from anywhere within the program. There are roughly two ways to make a variable or function available globally:

- Variables added to the properties of the global object
- Variables that have script scope

Variables added to the properties of the global object

You can give variables and functions global scope by adding them to the properties of the global object. The global object differs depending on the environment: in a browser it is the window object, and in Node.js it is the global object. In this example, I will assume you are coding for a browser environment and show you how to add properties to the window object. The way to do it is to declare variables and functions with var. Variables and functions declared with var get added as properties of the global object and can be referenced from anywhere.

```javascript
// A variable added to the properties of the window object
var name = 'KINTO';
console.log(window.name); // KINTO
```

You can also omit the window object when referencing variables added to the global object.

```javascript
// Calling a variable while omitting the window object
var name = 'KINTO';
console.log(name); // KINTO
```

Script-scoped variables

Script scope is the scope in which variables and functions declared at the top level of a JavaScript file or at the top level of a script element can be accessed. Variables and functions declared at the top level with let or const have script scope.

```html
<!-- Variables that have script scope -->
<script>
  let name = 'KINTO';
  const company = 'KINTO Technologies Corporation';
  console.log(name); // KINTO
  console.log(company); // KINTO Technologies Corporation
</script>
```

Top level

"Top level" means outside any functions or blocks. This explanation may seem abstract, so let's look at the following examples to see the difference between variables declared at the top level and those that are not:

```html
<!-- Variables that have been declared at the top level -->
<script>
  let name = 'KINTO';
  const company = 'KINTO Technologies Corporation';
  console.log(name); // KINTO
  console.log(company); // KINTO Technologies Corporation
</script>

<!-- Variables that have not been declared at the top level -->
<script>
  const getCompany = function() {
    const name = 'KINTO';
    console.log(name); // KINTO
    return name;
  }
  console.log(name); // ReferenceError: name is not defined

  if (true) {
    const company = 'KINTO Technologies Corporation';
    console.log(company); // KINTO Technologies Corporation
  }
  console.log(company); // ReferenceError: company is not defined
</script>
```

In the code above, the variable name declared in the function getCompany and the variable company declared in the if statement can only be referenced from within that function or that if block, respectively.

Differences between the global object and script scope

Variables declared at the top level with let or const have global scope and can be referenced anywhere, just like ones declared with var. However, unlike variables declared with var, they do not get added to the properties of the global object.

```javascript
// Variables declared with let or const do not get added to the properties of the global object
let name = 'KINTO';
const company = 'KINTO Technologies Corporation';
console.log(window.name); // undefined
console.log(window.company); // undefined
```

:::message
Handle the global object with care: adding variables and functions to the properties of the global object with var should be avoided, because it can contaminate the global object. This is because having the same variable and function names appear in different scripts can lead to unexpected behavior (illustrated just below). So, if you want to make a variable available globally, the recommended way is to declare it with let or const.
:::
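As a small illustration of the collision this note warns about, suppose two independent classic scripts (hypothetical files a.js and b.js) are loaded on the same page and both declare a var with the same name. The later script silently overwrites the earlier one's value on the global object:

```html
<script>
  // a.js (hypothetical): stores its settings on the global object via var
  var config = { theme: 'light' };
</script>
<script>
  // b.js (hypothetical): written separately, happens to reuse the same name
  var config = 'KINTO';
  console.log(window.config); // 'KINTO' (the object from a.js has been overwritten)
</script>
```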
Function scope

As seen in the earlier example of variables that do not have script scope, variables and functions declared within the curly brackets {} of a function can only be referenced within that function. This is called **function scope**.

```javascript
const getCompany = function() {
  const name = 'KINTO';
  console.log(name); // KINTO
  return name;
}
console.log(name); // ReferenceError: name is not defined
```

Since the variable name is declared inside the function getCompany, it can only be referenced inside that function. If you try to reference it from outside the function, an error occurs.

Block scope

The earlier example also featured variables declared within a range enclosed by curly brackets; these can only be referenced within that block. This is called **block scope**.

```javascript
if (true) {
  let name = 'KINTO';
  const company = 'KINTO Technologies Corporation';
  console.log(name); // KINTO
  console.log(company); // KINTO Technologies Corporation
}
console.log(name); // ReferenceError: name is not defined
console.log(company); // ReferenceError: company is not defined
```

Variables declared with let or const like this have block scope: they can only be referenced inside the curly brackets in which they were declared.

:::message
Function declarations and block scope: functions declared inside a block do not get block scope, so they can also be referenced from outside the block. Note that the results can vary depending on the JavaScript version and runtime environment.

```javascript
if (true) {
  function greet() {
    console.log('Hello, KINTO');
  }
  greet(); // Hello, KINTO
}
greet(); // Hello, KINTO
```

So, if you want a function to have block scope, the recommended way is to declare a block-scoped variable and assign the function to it.

```javascript
if (true) {
  const greet = function() {
    console.log('Hello, KINTO');
  }
  greet(); // Hello, KINTO
}
greet(); // ReferenceError: greet is not defined
```
:::

Module scope

Module scope is the referenceable scope of variables and functions declared inside a module. Variables and functions inside a module can only be accessed within that module and cannot be referenced directly from outside it. To reference them from outside, you need to expose them with export and bring them into the other file with import. For example, let's declare some variables in the file module.js as follows:

```javascript
// module.js
export const name = 'KINTO';
export const company = 'KINTO Technologies Corporation';
const category = 'subscription service'; // This variable has not been exported, so it cannot be referenced from outside.
```

The exported variables can be referenced by importing them into other files.

```javascript
// Calling variables with module scope
import { name, company } from './module.js';

console.log(name); // Output: KINTO
console.log(company); // Output: KINTO Technologies Corporation

// This line causes an error because `category` has not been exported.
console.log(category); // ReferenceError: category is not defined
```

Trying to reference a variable that has not been exported generates an error, because module scope hides the variable from the outside.
```javascript
// Calling a variable that does not have module scope
import { category } from './module.js'; // SyntaxError: The requested module './module.js' does not provide an export named 'category'

console.log(category); // The import fails, so this line cannot be run.
```

This shows how understanding module scope is extremely important for managing dependencies between modules in JavaScript.

Summary

- Scope is the range within which variables and functions can be referenced by running code.
- Global scope is the scope that can be referenced from anywhere.
- Script scope is the referenceable scope of variables and functions declared at the top level of either a JavaScript file or a script element.
- Function scope is the referenceable scope of variables and functions declared inside the curly brackets of a function.
- Block scope is the referenceable scope of variables and functions declared within a range enclosed by curly brackets.
- Module scope is the scope that is only referenceable inside a module.

This time, we explored the different types of scope in JavaScript. In the next article, I'll introduce additional concepts related to scope.
Introduction

Hello! I'm Cui from the Global Development Division at KINTO Technologies. I'm currently involved in the development of KINTO FACTORY, and this year I collaborated with team members to investigate the cause of memory leaks in our web service, identify the issues, and implement fixes to resolve them. This blog post will outline the investigation approach, the tools utilized, the results obtained, and the measures implemented to address the memory leaks.

Background

The KINTO FACTORY site that we are currently developing and managing operates a web service hosted on AWS ECS. This service utilizes a member platform (an authentication service) and a payment platform (a payment processing service), both of which have been developed and managed by our company. In January of this year, the CPU utilization of the ECS task for this web service spiked abnormally, leading to a temporary outage and making the service inaccessible. During this time, an incident occurred where a 404 error and an error dialog were displayed during certain screen transitions or operations on the KINTO FACTORY site. A similar memory leak occurred last July, which was traced to frequent Full GCs (cleanup of the old generation), leading to a significant increase in CPU utilization. In such cases, a temporary solution is to restart the ECS task. However, it is crucial to identify and resolve the root cause of the memory leak to prevent recurrence. This article outlines the investigation and analysis of these events, offering solutions based on the findings from these cases.

Summary of Investigation Findings and Results

Details of Investigation

First, a detailed analysis of the event revealed that the abnormally high CPU utilization of the web service was caused by frequent Full GCs (cleanup of the old generation). Typically, after a Full GC is performed, a significant amount of memory is freed, and another one shouldn't be needed for some time. Frequent Full GCs therefore point to excessive consumption of in-use memory, suggesting that a memory leak was occurring. To test this hypothesis, we set out to reproduce the memory leak by continuously calling the APIs over an extended period, focusing primarily on those that were frequently called during the timeframe when the memory leak occurred. The memory status and dumps were then analyzed to pinpoint the root cause of the issue. The tools used for the investigation were:

- API traffic simulation with JMeter
- Monitoring memory state using VisualVM and Grafana (local and verification environments)
- Filtering frequently called APIs with OpenSearch

Additionally, here's a brief explanation of the "old generation" memory frequently mentioned below. In Java memory management, the heap is divided into two parts: the young generation and the old generation. The young generation consists of newly created objects. Objects that persist for a certain duration in this space are gradually moved through the survivor spaces into the old generation. The old generation stores long-lived objects, and when it becomes full, a Full GC occurs. The survivor spaces are the part of the young generation that tracks how long objects have survived.
Result A significant number of new connection instances were being created during external service requests, leading to memory leaks caused by excessive and unnecessary memory consumption. Details of Investigation 1. Identify frequently called APIs To get started, we created a dashboard in OpenSearch summarizing API calls to understand the most frequently called processes and their memory usage. 2. Continue invoking the identified APIs in the local environment for 30 minutes, and afterward, analyze the results. Investigation Method To reproduce the memory leak in the local environment and capture a memory dump for root cause analysis, we used JMeter with the following settings to call the APIs continuously for 30 minutes. JMeter settings Number of threads: 100 Ramp-up period*: 300 seconds Test environment Mac OS Java version: OpenJDK 17.0.7 2023-04-18 LTS Java configuration: -Xms1024m -Xmx3072m *Ramp-up period is the amount of time in seconds during which the specified number of threads will be started and executed. Result and hypothesis No memory leak occurred. We assumed that the memory leak was not reproduced because it was different from the actual environment. Since the actual environment runs on Docker, we decided to put the application in a Docker container and validate it again. 3. Continue calling the APIs again in the Docker environment, then analyze the results Investigation Method To reproduce the memory leak in the local environment, we used JMeter with the following settings and kept calling the APIs for one hour. JMeter settings Number of threads: 100 Ramp-up period: 300 seconds Test environment Local Docker container on Mac Memory limits: 4 GB CPU limits: 4 cores Results No memory leak occurred even if the environment is changed in the local environment. Hypothesis Different from actual environment No external APIs are being called API calls for an extended period may gradually accumulate memory Objects that are too large may not fit into survivor spaces and end up in the old generation Since the issue could not be reproduced in the local environment, we decided to re-validate it in a verification environment that closely mirrors the production environment. 4. Continue making requests to the relevant external APIs in the verification environment for an extended period, and then analyze the results for any anomalies or issues. Investigation Method To reproduce the memory leak in the verification environment, we used JMeter with the following settings and kept calling the APIs. Called APIs: total 7 Duration: 5 hours Number of users: 2 Loop: 200 (Planned 1000, but changed to 200 due to low actual orders) Total Factory API calls: 4000 Affected external platforms: Member platform (1600), Payment platform (200) Results No Full GC occurred, and memory leak was not reproduced. Hypothesis No Full GC was triggered because the number of loops was low. While memory usage was increasing, it had not yet reached the upper threshold. We will increase the number of API calls and reduce the memory limit to trigger a Full GC for further analysis. 5. Reduce the memory limit and continue hitting the APIs over an extended period to observe memory behavior and potential GC activity. Investigation Method We lowered the memory limit in the verification environment and kept calling the member platform-related APIs in JMeter for 4 hours. 
Duration: 4 hours APIs: 7 APIs the same as last time Frequency: 12 loops per minute (5 seconds per loop) Member platform call frequency: 84 times per minute Number of member platform calls in 4 hours: 20164 Dump acquisition settings: export APPLICATION_JAVA_DUMP_OPTIONS='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/app/ -XX:OnOutOfMemoryError="stop-java %p;" -XX:OnError="stop-java %p;" -XX:ErrorFile=/var/log/app/hs_err_%p.log -Xlog:gc*=info:file=/var/log/app/gc_%t.log:time,uptime,level,tags:filecount=5,filesize=10m' ECS memory limit settings: export APPLICATION_JAVA_TOOL_OPTIONS='-Xms512m -Xmx512m -XX:MaxMetaspaceSize=256m -XX:MetaspaceSize=256m -Xss1024k -XX:MaxDirectMemorySize=32m -XX:-UseCodeCacheFlushing -XX:InitialCodeCacheSize=128m -XX:ReservedCodeCacheSize=128m --illegal-access=deny' Results The memory leak was successfully reproduced and a dump was obtained. If you open the dump file in IntelliJ IDEA, you can see detailed memory information. A detailed analysis of the dump file revealed that a significant number of new objects were being created with each external API request. Additionally, some utility classes were not being managed as singletons, contributing to the issue. 6. Heap dump analysis results We found that 5,410 HashMap$Node were created in reactor.netty.http.HttpResources , which occupies 352,963,672 bytes (83.09%). Identification of memory leak location There is a leak in channelPools(ConcurrentHashMap) in reactor.netty.resources.PooledConnectionProvider , and we focused on the storing and retrieving logic. poolFactory(InstrumentedPool) Retrieving location Create holder(PoolKey) with channelHash obtained from remote(Supplier<?extends SocketAddress>) and config(HttpClientConfig) Retrieve poolFactory(InstrumentedPool) from channelPools with holder(PoolKey) . Return an existing similar key if it exists, or create a new one if not. The cause of the leak is that the same setting is not considered the same key: reactor.netty.resources.PooledConnectionProvider public abstract class PooledConnectionProvider<T extends Connection> implements ConnectionProvider { ... @Override public final Mono<? extends Connection> acquire( TransportConfig config, ConnectionObserver connectionObserver, @Nullable Supplier<? extends SocketAddress> remote, @Nullable AddressResolverGroup<?> resolverGroup) { ... return Mono.create(sink -> { SocketAddress remoteAddress = Objects.requireNonNull(remote.get(), "Remote Address supplier returned null"); PoolKey holder = new PoolKey(remoteAddress, config.channelHash()); PoolFactory<T> poolFactory = poolFactory(remoteAddress); InstrumentedPool<T> pool = MapUtils.computeIfAbsent(channelPools, holder, poolKey -> { if (log.isDebugEnabled()) { log.debug("Creating a new [{}] client pool [{}] for [{}]", name, poolFactory, remoteAddress); } InstrumentedPool<T> newPool = createPool(config, poolFactory, remoteAddress, resolverGroup); ... return newPool; }); As the name suggests, channelPools are objects that hold channel information and are reused when similar requests come in. The PoolKey is created based on the hostname and HashCode in the connection settings, and the HashCode is further used. channelHash Retrieving location Hierarchy of reactor.netty.http.client.HttpClientConfig Object + TransportConfig + ClientTransportConfig + HttpClientConfig Lambda expression com.kinto_jp.factory.common.adapter.HttpSupport passed to PooledConnectionProvider L5 the Lambda expression defined here is passed to PooledConnectionProvider as config#doOnChannelInit . 
abstract class HttpSupport { ... private fun httpClient(connTimeout: Int, readTimeout: Int) = HttpClient.create() .proxyWithSystemProperties() .doOnChannelInit { _, channel, _ -> channel.config().connectTimeoutMillis = connTimeout } .responseTimeout(Duration.ofMillis(readTimeout.toLong())) ... } 7. Behavior when retrieving channelPools (illustration) Key match case (Normal) The information present in channelPools is the key and InstrumentedPool is reused. Key mismatch case (Normal) The information that does not exist in channelPools is the key and InstrumentedPool is newly created. This case (Abnormal) The information present in channelPools is the key, but InstrumentedPool is not reused and is newly created. Correction and verification of problems Correction Rewrite the Lambda expression in question to a property call Before correction abstract class HttpSupport { ... private fun httpClient(connTimeout: Int, readTimeout: Int) = HttpClient.create() .proxyWithSystemProperties() .doOnChannelInit { _, channel, _ -> channel.config().connectTimeoutMillis = connTimeout } .responseTimeout(Duration.ofMillis(readTimeout.toLong())) ... } After correction abstract class HttpSupport { ... private fun httpClient(connTimeout: Int, readTimeout: Int) = HttpClient.create() .proxyWithSystemProperties() .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, connTimeout) .responseTimeout(Duration.ofMillis(readTimeout.toLong())) ... } Verification Prerequisites Call MembersHttpSupport#members(memberId: String) 1000 times. Check the number of objects stored in PooledConnectionProvider#channelPools . Results before correction When executed in the state before correction, we found that 1000 objects were stored in PooledConnectionProvider#channelPools (cause of the leak). Results after correction When executed in the state after correction, we found that one object is stored in PooledConnectionProvider#channelPools (leak resolved). Summary Through this investigation, we successfully identified the cause of the memory leak in KINTO FACTORY's web service and resolved the issue by implementing the necessary corrections. Specifically, the memory leak was caused by the creation of a large number of new objects during external API calls. This issue was resolved by replacing the Lambda expression with a property call, which reduced object creation. Through this project, the following important lessons were learned: Continuous monitoring : We recognized the importance of continuous monitoring through abnormal ECS service CPU utilization and frequent Full GCs. By continuously monitoring system performance, potential issues can be detected early and addressed promptly, preventing them from escalating. Early problem identification and countermeasures : By suspecting memory leaks in the web service and repeatedly calling the APIs over an extended period to reproduce the issue, we identified that a large number of new objects were being created during external service requests. This allowed us to quickly identify the cause of the issue and implement appropriate corrections. Importance of teamwork : When tackling complex issues, the key to success lies in effective collaboration and teamwork, with all members working together towards a common goal. This correction and verification were accomplished through the cooperation and effort of the entire development team. In particular, the collaboration at each stage—investigation, analysis, correction, and verification—was crucial to the success of the process. 
During the investigative phase, there were many challenges. For example, reproducing memory leaks was difficult in the local environment, and it was necessary to re-validate in a verification environment similar to the actual environment. It also took a lot of time and effort to reproduce memory leaks and identify the cause by making prolonged calls to the external API. However, by overcoming these challenges, we were ultimately able to resolve the problem, leading to a strong sense of accomplishment and success. Through this article, we have shared practical approaches and valuable lessons learned to enhance system performance and maintain long-term stability. We hope this information will be helpful to developers facing similar challenges, providing them with insights to address such issues effectively. That's all for now!
Introduction

I'm Wu from the Global Development Group. I usually work as a project manager for web and portal projects. I recently started going to the boxing gym again, and I want to work hard on muscle training and dieting! We introduced a heatmap tool called Clarity from Microsoft on the website we are developing, so I'd like to talk about it.

Background

The Global KINTO Web, which introduces the global expansion of the mobility service KINTO, faces challenges such as short page visit durations and high bounce rates among users. While Google Analytics allows us to check metrics like scroll depth and click-through rates, it doesn't provide insights into user behavior or what captures their interest. Therefore, we decided to implement an analysis tool that allows us to monitor user behavior and easily identify issues.

Reasons for Choosing Microsoft Clarity

As mentioned earlier, the Global KINTO Web is currently a relatively small website and is not a service site. Considering cost-effectiveness, we needed a heatmap tool that was as affordable and easy to implement as possible. We evaluated popular tools such as User Heat, Mieruka Heatmap, Mouseflow, and User Insight. However, there were several reasons why we ultimately chose Clarity. First, it is provided by Microsoft, a company already integrated within KINTO Technologies. Secondly, it is entirely free. Additionally, Clarity allows us to grant permissions to team members, enabling collaborative management. The simple setup process and the minimal engineering workload required for implementation were also critical factors in our decision.

Comparison table of popular tools:

| Tools | Features | Implementation method | Price |
| --- | --- | --- | --- |
| Microsoft Clarity | ・Instant heatmap: shows where users clicked and how far they scrolled ・Session recording available (←very useful) ・Recording ・Google Analytics integration | Provided by Clarity; embed HTML tags into the website | Free |
| User Heat | ・Mouse flow heatmap ・Scroll heatmap ・Click heatmap ・Attention heatmap | Provided by User Heat; embed HTML tags into the website | Free |
| Mieruka Heatmap | ・Three heatmap functions ・Ad analysis feature ・Event segmentation feature ・A/B testing feature ・IP exclusion feature ・Customer Experience Improvement Chart | | ・Free plan: 3,000 PV/month ・Paid plan: offers options like A/B testing, etc. |
| Mouseflow | ・Basically includes the above features, plus robust funnel setup and conversion user analysis ・Recording feature ・Form analysis feature (view details like input time, submission count, drop-off rate) | Embed the Mouseflow tracking code into the website | Starter plan (11,000 yen/month) to Enterprise plan |

What is Microsoft Clarity?

Released on October 29, 2020, Microsoft Clarity is a free heatmap tool provided by Microsoft. According to the official website, it is a user behavior analytics tool that allows you to interpret how users interact with your website using features such as session replays and heatmaps.

Microsoft Clarity Setup

1. Create a new project in Clarity.
2. Paste Clarity's tracking code into the head element of your pages (an example of what the tag looks like is shown below).
3. Integrate with Google Analytics.
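For reference, the tracking tag pasted in step 2 generally takes the form below. Copy the exact snippet from your own Clarity project's setup page; the project ID here is only a placeholder.

```html
<script type="text/javascript">
  (function (c, l, a, r, i, t, y) {
    c[a] = c[a] || function () { (c[a].q = c[a].q || []).push(arguments); };
    t = l.createElement(r); t.async = 1; t.src = "https://www.clarity.ms/tag/" + i;
    y = l.getElementsByTagName(r)[0]; y.parentNode.insertBefore(t, y);
  })(window, document, "clarity", "script", "YOUR_PROJECT_ID");
</script>
```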
Dashboard

The Dashboard provides a clear overview of your site's status with unique metrics such as Dead Clicks, Quick Backs, Rage Clicks, and Excessive Scrolling.

Dead Clicks

Dead Clicks refer to instances where a user clicks or taps on an element on the page, but no response is detected.

(Screenshot: Dead Clicks)

You can see exactly where users clicked. It's also easy to understand because user movements are recorded in videos. In the case of the Global KINTO Web, panels introducing each service are frequently clicked, which suggests that users are seeking more detailed information.

Quick Back

Quick Back refers to when a user quickly returns to the previous page after viewing a page. This can happen when users quickly determine that the page is not what they were looking for, or when they accidentally click on a link. It helps identify parts of your website where navigation might be less intuitive or where users are more likely to make accidental clicks.

(Screenshot: Quick Back)

Rage Clicks

Rage Clicks refer to when a user repeatedly clicks or taps on the same area multiple times.

(Screenshot: Rage Clicks)

On the Global KINTO Web, there were several users who repeatedly clicked on a collection of links due to slow internet speeds. Upon investigation, it was found that this issue occurred specifically for users on the same operating system, leading to further device testing.

Excessive Scrolling

Excessive Scrolling refers to when users scroll through a page more than expected. This metric helps identify the percentage of users who are not thoroughly reading the content on a page.

(Screenshot: Excessive Scrolling)

Heatmap

Click Heatmap

You can see how many times users clicked on which parts of the page. The left menu shows the ranking of the most clicked parts.

(Screenshot: Click maps)

Scroll Heatmap

The Scroll Heatmap shows how far users scroll down the page. The red areas indicate the most viewed sections, with the colors gradually changing from orange to green to blue, representing decreasing levels of engagement.

(Screenshot: Scroll maps)

Click Area Heatmap

The Click Area Heatmap functions similarly to the Click Heatmap but allows you to see which broader areas of the page are being clicked on. This helps determine whether the content placed on the page is being viewed.

(Screenshot: Area maps)

Recording

User behavior is recorded in real time. You can review the mouse cursor's position, page scrolling, page transitions, and click actions through the video. Additionally, information about the user's device, location, number of clicks, pages viewed, and the final page visited can be accessed from the left-hand menu. The ability to view the entire sequence of user actions in a realistic video format might be Clarity's most compelling feature.

(Screenshot: Recordings overview)

Conclusion

The Global KINTO Web is still in development and has room for improvement. After deciding to implement a heatmap tool, we were able to release it in just about two weeks (0.5 person-months), thanks to the quality of Clarity and the ease of its implementation. While we are not yet fully utilizing all of its features, we plan to leverage this tool going forward to provide an even better user experience.
A.K

Self-introduction: I'm A from the my route Development Group. I'm from Latvia. At my previous job, a startup, I worked across a wide range of areas as a full-stack engineer.
What is your team's structure? Six people, including myself.
What was your first impression when you joined KTC? Were there any gaps? We're part of a large corporate group, but my team has a relaxed atmosphere and is surprisingly easy to work in. I also really liked that if there's a technology you're interested in, there's basically a study session for it.
What is the atmosphere like on the ground? There are more non-Japanese members than I expected, but everyone has a high technical level and is easy to talk to.
How did you feel about writing for the blog? It's not my strong suit.
Question from M.O: A, your home is fully smart-home enabled. What do you ask Alexa most often? Let's see. Before leaving the house, I always ask "What's the weather today?" Also, since it's linked to Spotify, I use "What's this song?" and "Play ○○" almost every day. A bit of trivia: Alexa can be used as a TTS speaker, so I like having it play custom messages. The most useful one is simple: between 7:00 and 8:00, it announces the time plus a message every five minutes, like "It's already 7:35! Are you actually planning to leave?!"

S.D

Self-introduction: I'm Deguchi from the Produce Group. At my previous job I worked on car navigation and map data, and I was also involved in natural language processing and machine learning.
What is your team's structure? A group of five in total, including myself. Each member works on different projects individually.
What was your first impression when you joined KTC? Were there any gaps? The internal atmosphere had already been explained to me during my interviews, so there were no big surprises.
What is the atmosphere like on the ground? Most communication happens frequently on Slack, but we also talk face to face a lot, so I feel it's an environment where communication is easy.
How did you feel about writing for the blog? I think it's great that there are opportunities to share information outside the company. It also gave me a reason to look through past articles and the tech blog, and I learned a lot about other members of the company.
Question from A.K: Of all the gadgets you've collected so far, which do you think is the most useful? I can't narrow it down to one, so let me list a few!
Raspberry Pi: A wonderful product that lets you casually try out IoT and experience actually building things. It's surprising that Ubuntu (with a GUI) runs properly at this price.
Insta360 Flow: It's great to get a gimbal with this level of performance at this cost! Subject tracking is handy too!
Mitene GPS: Something I'd want a small child to carry. A good point is that it can be brought into places where smartphones aren't allowed. The battery life is also good.

K.N

Self-introduction: I'm Nishi from the Data Engineering Team in the Analysis Group.
What is your team's structure? The Analysis Group consists of three teams: the Data Science Team, the Data Engineering Team, and the Data Produce Team.
What was your first impression when you joined KTC? Were there any gaps? There are so many internal study sessions!
What is the atmosphere like on the ground? At the daily morning meeting we share progress, issues, and things we want to discuss. Since we work across three locations (Tokyo, Nagoya, and Osaka) with a mix of remote and in-office work, we talk over Slack huddles with screen sharing as needed.
How did you feel about writing for the blog? I used past articles as a reference myself, so I think it's a good way for people to get to know other new employees.
Question from S.D: Looking back, is there any information or system you wish had been available when you joined? I attended many orientation sessions on the business model after joining, but I think some kind of review opportunity around three months later would help the knowledge stick better.

W ![W avatar](/assets/blog/authors/numami/maymember/4.png =200x)

Self-introduction: I'm Watanabe from the Organizational HR Team in the Human Resources Group. Previously I was a sales manager at a staffing company and did HR at a startup.
What is your team's structure? The Human Resources Group has an Organizational HR Team, a Recruiting Team, and a Labor & General Affairs Team, with 13 members in the group overall.
What was your first impression when you joined KTC? Were there any gaps? No particular gaps. Being part of the TOYOTA group, I expected internal controls to be solid, and they were just as rigorous as I imagined. That said, I think there is still plenty of freedom. I had also heard about various internal challenges, for better or worse, during my interviews, so there were no surprises there either.
What is the atmosphere like on the ground? My impression is that everyone approaches their work positively. During my first month I had one-on-one meetings with the department heads and managers, and everyone took them willingly, which let me get started with peace of mind.
How did you feel about writing for the blog? I think it's wonderful that the company puts effort into communicating both internally and externally. I had wondered whether external communication would be tightly controlled under a large corporate group, but in that respect it feels free and venture-like.
Question from K.N: What do you want to take on as a challenge at KTC? I want to take on the challenge of creating an environment where everyone can move forward as one.

K

Self-introduction: I belong to the IT/IS Department. At my previous job, a systems integrator, I worked on Microsoft infrastructure, .NET development, and corporate IT operations.
What is your team's structure? The IT/IS Department is made up of four teams: Asset-Platform, Corporate-Engineering, Tech-Service, and Enterprise Technology. I'm on the Corporate-Engineering team, where I mainly work on addressing business issues and requests through system introduction, renewal, and improvement.
What was your first impression when you joined KTC? Were there any gaps? No gaps. My first impression was that everyone in the IT/IS Department thinks about how their own tasks deliver value, and I thought that level of alignment was wonderful.
What is the atmosphere like on the ground? I usually work at the Muromachi office. It's an atmosphere where people casually consult one another and treat each other's problems as their own. We regularly have 1-on-1 meetings with leaders, managers, and the department head to talk frankly about opinions, impressions, requests, and concerns, so I think it's easy to reach out to people outside of 1-on-1s as well.
How did you feel about writing for the blog? The company actively communicates with the outside world beyond this blog too, so I simply thought it was a good initiative.
Question from W: Tell us about an interesting place you've been recently (a trip, etc.). I recently moved, and I went to a sento (public bathhouse) with a friend from university who came to visit. Its Showa-era look and atmosphere had a lot of charm and gave me a sense of escape from the everyday. I had never been very interested in sento before, but it left an impression on me as an interesting place to recharge.

JK ![JK avatar](/assets/blog/authors/numami/maymember/6.png =200x)

Self-introduction: I'm Kim from the Toyota Woven City Payment Solution Development Group.
What is your team's structure? A group of six in total, including myself. We actually work on the Woven side, so the team includes members from Woven as well as KTC. Our work ranges widely from the front end to the back end and infrastructure.
What was your first impression when you joined KTC? Were there any gaps? I was glad I could hear about so many different topics during orientation.
What is the atmosphere like on the ground? Basically, we do sprint planning once a week and work toward the goals we set. We also regularly hold Tech Talk and Document Reading sessions. Since we mostly work remotely, we communicate with team members using Slack, Meet, and other tools.
How did you feel about writing for the blog? I wanted to share even a little useful information with the people who read the article.
Question from K: What is the one thing you value most in your work?
I think it's the same anywhere, but communication with people is the most important thing. Especially on the development side, a miscommunication about functional requirements can result in something completely different being built (lol). One more thing is consistency. By keeping at it, you raise the quality not only of yourself and your own work but of the whole team.

M ![M avatar](/assets/blog/authors/numami/maymember/7.png =200x)

Self-introduction: I'm M from the Data Integration Platform Team.
What is your team's structure? The product is maintained by two people, including myself. The product I'm responsible for integrates with many systems, so I often interact with a lot of different people.
What was your first impression when you joined KTC? Were there any gaps? I was surprised at how actively study sessions and new tools and services are introduced.
What is the atmosphere like on the ground? It's a calm atmosphere where we talk when necessary.
How did you feel about writing for the blog? I didn't think anything in particular.
Question from JK: How were onboarding and catch-up handled after you joined? Please share anything you thought was good about it! First of all, in a 1-on-1, I got an explanation of the team structure and the purpose and positioning of the product I would be working on. After that, I set up my machine and development environment following the documentation. Up to that point, I'd say it was ordinary onboarding. Then, to understand the business, I did a hands-on walkthrough of the whole flow up to signing a KINTO new-car contract. The hands-on materials were carefully prepared, and it was great to be able to understand the flow of leasing a car.

D ![D avatar](/assets/blog/authors/numami/maymember/8.png =200x)

Self-introduction: I'm D from the my route Development Group.
What is your team's structure? Six people, including myself.
What was your first impression when you joined KTC? Were there any gaps? I felt it's a company with many opportunities to share information. The atmosphere is more relaxed than I had imagined.
What is the atmosphere like on the ground? We often work quietly on our own. My teammates are all kind people.
How did you feel about writing for the blog? I got very nervous thinking that people outside the company would read it.
Question from M: Have you found a favorite lunch spot? If so, please share! The Indian restaurant on the first floor of our building.

M.O

Self-introduction: I'm Onuma from the Mobile Development Group. At my previous job I worked as an Android engineer for the voice platform Voicy, and I also developed the back end (Go), the front end (Angular/TypeScript), and iOS apps.
What is your team's structure? The my route Android development team has four members, including myself.
What was your first impression when you joined KTC? Were there any gaps? I had heard beforehand that there were many engineers from outside Japan, but there were even more than I expected. There are plenty of internal study sessions and lots of opportunities to learn and share output.
What is the atmosphere like on the ground? Everyone's area of responsibility is clearly defined, so I can concentrate on my work and coding. I also feel it's an environment where we can immediately share what we learn on the job.
How did you feel about writing for the blog? I like producing output about technology.
Question from D: If anything, what has been the most troublesome thing since joining? Following the work-from-home rules and adding my remote-work schedule to Outlook.
UI Guidelines
Good morning, good afternoon, or good evening! This post is brought to you by Az from the Global Development UIUX Team, who loves to take apart machines and take pictures while sipping delicious tea.

UI Guidelines

What are UI Guidelines in the first place? What kind of people use them? Who becomes happier by using them? Let's explore these questions step by step.

Differences between Brand Guidelines and UI Guidelines

Let me explain the general differences between Brand Guidelines and UI Guidelines. KINTO's guidelines are not publicly available, but we do have Brand Guidelines.

What are Brand Guidelines?

They outline key branding rules to follow, including:

- The brand's philosophy and values
- Brand name usage and writing conventions
- Approved colors and imagery
- Required user experience elements

*This image is for illustrative purposes.

Based on these rules, designers think about expression methods and designs. Reference: What are Brand Guidelines?

What are UI Guidelines?

UI Guidelines provide clear, practical examples of valid design elements:

- Colors and shapes used for buttons, text, and so on
- The screen layout ratio
- How to use icons and images

Here, you'll typically find components and specifications ready to be applied directly, both in design and in implementation.

What happens when you implement from a requirement?

For example, imagine that the conditions below are what make a button "KINTO-like":

- It uses fixed colors
- It's rectangular
- It has an easy-to-read label
- It can be identified as a button

Do you have an idea in mind? Now, let's say we get the results below. All conditions are met, but there are parts that differ from the button we may have been expecting:

- First row: there are no rounded corners on the four sides
- Second row, left: the aspect ratio of the margin in the button is wrong
- Third row, left: it has shadows that are not used on the others

If it was created under the same parameters, why don't all the details match up?

The root of the issue lies in the lack of a common understanding

When the same team members work together consistently, there's a strong likelihood they can collaborate effectively. However, in reality, both teams and their members often change. When team members get feedback like "This is different from what we expected from you..." after completing a task, the need to redo the work can lead to significant losses in both time and motivation. In the UI Guidelines, the main button is defined with precise specifications, such as rounded corners set to 30 px, top and bottom margins at 10 px, and a width of 1/12 of the screen. This ensures consistent output with no deviation.
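As a rough sketch (the token names below are hypothetical, not taken from KINTO's actual guidelines), specifications like these can be captured as shared design tokens so that designers and the front-end team always reference the same values:

```javascript
// Hypothetical design tokens based on the main-button specification quoted above.
export const mainButtonTokens = {
  borderRadius: '30px',       // rounded corners
  paddingTop: '10px',         // top margin inside the button
  paddingBottom: '10px',      // bottom margin inside the button
  width: 'calc(100% / 12)',   // 1/12 of the screen width
};
```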
Concept and usage of UI Guidelines

Designing according to this format makes the work easier for both the designer and the front-end team. Let me explain with some common, real-world examples.

You can follow the guidelines without a designer

There's no need to stress over minor details: the styles, including text size, are standardized with fixed presets. Often, improper sizing and margins lead to unstable quality. However, if you follow the guidelines when setting margins, the screen layout will remain well-organized and visually appealing, even without designer adjustments.

Minimizes screen size issues

A common issue with design files is handling screen size pixels. With the guidelines, ratios and breakpoints are predefined, ensuring there are no discrepancies.

Many elements can be created using "standard specifications"

Input form layout: Input forms are a typical example of content with similar fields where additions and reordering are frequent. We've seen several changes in recent projects, but since the designs followed the guidelines, we were able to modify and implement them in parallel without issues.

Message delivery: Result screens and error screens often contain a large number of text elements and combinations. Since the layout for icons and text is fixed for each status, there was no need to prepare multiple patterns; only exceptions required special treatment.

Fewer problems for everyone!

Consistent output is achievable regardless of differences in experience and skill. The guidelines serve two major purposes: reducing the need for verbal and written communication and maintaining a shared understanding across teams. I plan to keep improving the system so that we can continue resolving challenges and can say, "If you run into issues, just check the guidelines and your problem will be solved!"
Hello, this is HOKA from the Human Resources Group. (I have also written an article in the past called Let's Create a Human Resources Group - Behind the Scenes of an Organization that Grew to 300 Employees in Three Years, so please take a look at that article as well.)

On March 28, 2024, just three days before the end of fiscal 2023, 40 members of KINTO Technologies' Development Support Division from Osaka, Nagoya, and Tokyo gathered at the Google office in Shibuya, Tokyo to participate in the 10X Innovation Culture Program. Here is a report on the event.

What is the 10X Innovation Culture Program?

The 10X Innovation Culture Program is a leadership program designed to create an organizational environment that fosters innovation. It was launched by Google Japan in September 2023. The program consists of three key elements: online training, assessment tools, and solution packages. Through the online training, participants learn about the "Think 10X" concept. The assessment tools help participants understand their current position and identify issues. The solution packages provide strategies to solve those issues. This program allows participants to naturally integrate innovative ideas and knowledge into their own organizations.

How it started

Awatchi from our DBRE team is one of the management team members of the Corporate Culture and Innovation Subcommittee at Jagu'e'r and had shown a strong interest in the 10X Innovation Culture Program. When the program was launched, Awatchi organized a project to gather volunteers from within the company to experience a light version of the program at the Google office. I participated, and that's how it all began. It was so enjoyable and I learned so much that when I shared it at the morning meeting the next day, everyone was enthusiastic about the idea. The manager suggested, "Let's do this with the entire team!" and the division head immediately approved, saying, "If it's just for the Development Support Division, I can authorize it myself!" With this excitement, before we knew it, the event was quickly organized. Even before getting approval from the president or vice president, we had already decided to go ahead with this plan. I appreciate our company's culture and sense of speed. From then on, we began a process of trial and error to see how we could conduct this on a large scale with over 40 people in the entire Development Support Division (lol).

The road to implementation

At first, we were thinking of conducting it just with our own team members, but that would make it difficult to expand to divisions other than the Development Support Division. To achieve the "10X Innovation Culture," we must fully understand it ourselves and be able to speak about it with confidence. Therefore, this time we decided to hold a 10X Culture Program at a Google office, run by Google employees. The goal was to learn how to run the program effectively and to train the "facilitators" who could conduct it within our company in the future. When we carried out a survey within the Development Support Division to find members interested in becoming facilitators, 17 members responded that they wanted to take on this role. Members from outside HR were also included, regardless of occupation or gender. (They participated in the 10X Culture Program with the intention of becoming facilitators.) Once the overall lineup was decided, two people from Google, Awatchi, and HOKA took the lead in planning the content.
Drawing on the experience I gained from taking the course in October 2023, we decided to watch the videos and complete the assessments in advance so we could concentrate more on the discussions. Preparatory meeting online Watch 6 videos Take an assessment 10X innovation culture program at Google office Understanding trends in the Development Support Division from assessment results Conduct two discussions We held a preparatory meeting (March 20th) Awatchi also started preparations for the first preparatory meeting. However, the assessment tool provided didn't work as expected! Awatchi solved it with brute force. There were no assessments available in English! So we requested an English translation from an in-house specialist. There were no English videos either! So we used YouTube's translation tool. Various issues arose, and we received help from people both inside and outside the company. 24% of KINTO Technologies' employees are foreign nationals. This was a moment that made me realize once again the importance of being able to speak English. Awatchi acted as the facilitator for the preparatory meeting. We watched the 10X Innovation Culture Program videos one by one and then answered the assessments, repeating the process. At the end of the preparatory meeting, the results were shared on the spot via Looker Studio, an assessment tool. Participants were able to see trends within the Development Support Division overall and within each group, which made them more enthusiastic. ![AssessmentResults](/assets/blog/authors/hoka/20240611/assessment_result.png =750x) On the day (March 28th) Finally, the day arrived on March 28th. A total of 40 people from Tokyo, Osaka, and Nagoya gathered at the Google office. The event was held at the Google office, which is usually hard to get into, so I felt like a total tourist (lol). ![GatheringAtTheGoogleOffice](/assets/blog/authors/hoka/20240611/arriving.jpg =750x) The facilitators on the day were Rika and Kota from Google. At the opening, they explained what the "10X" in the 10X Innovation Culture Program stands for, using examples from Google. ![ScenesFromTheEvent](/assets/blog/authors/hoka/20240611/state.jpg =750x) Everyone listened attentively and took notes. And finally, the discussion began. To make the most of the limited time available, participants were divided into groups of around five to discuss "intrinsic motivation" and "risk-taking," areas which had shown potential for improvement in the preparatory assessment results. In the "intrinsic motivation" section, we discussed points such as "what is needed to approach daily work with passion" and "how can this be achieved within the company?" On the other hand, in the "risk-taking" section, participants exchanged opinions on topics such as "how to lower the psychological hurdles when taking on new challenges" and "how to create a culture that tolerates failure." The facilitator's role here was to lead each group. In this culture session discussion, it was important to ensure everyone had a chance to speak and to keep the discussions broad without focusing too much on individual points. The agreement for conducting the workshop, presented by Google, included the following points: Basic premises See it as an opportunity to learn Accept that mistakes are normal Notes Be aware of the impact your words have on those around you Interpret all opinions as being given in good faith. Don't share what others have said outside the group Let's Enjoy Google Culture! 
Let's call each other by nicknames These were very important points for making group work smooth and lively. Here is the actual discussion scene: ![Discussion1](/assets/blog/authors/hoka/20240611/discussion1.jpg =750x) ![Discussion2](/assets/blog/authors/hoka/20240611/discussion2.jpg =750x) ![Discussion3](/assets/blog/authors/hoka/20240611/discussion3.jpg =750x) ![Discussion4](/assets/blog/authors/hoka/20240611/discussion4.jpg =750x) ![Discussion5](/assets/blog/authors/hoka/20240611/discussion5.png =750x) ![Discussion6](/assets/blog/authors/hoka/20240611/discussion6.jpg =750x) The session was very exciting from start to finish, and at the same time, we were able to learn about our own challenges and understand what we could do to improve them. Here are the actual survey results. ![SurveyResult1](/assets/blog/authors/hoka/20240611/survey1.png =750x) ![SurveyResults2](/assets/blog/authors/hoka/20240611/survey2.png =750x) ![SurveyResults3](/assets/blog/authors/hoka/20240611/survey3.png =750x) We also received some nice comments from the two of the Google staffs! Rika Thank you for your hard work! This excitement was only possible thanks to the advance preparations made by everyone here, so let me once again express my gratitude to you all! We were also energized by the enthusiastic participants at the workshop.💪 I believe that the way you are moving forward with cultural transformation at your company will influence other companies as well! Kota Thank you very much for this valuable opportunity! I was overwhelmed by everyone's enthusiasm. I hope this will be the catalyst for KINTO's cultural development to move to the next stage. I'm rooting for you! We hope to develop further initiatives from here, so please continue to support us.😃 Epilogue As it was the end of March, it coincided with the goal-setting period at KINTO Technologies. After participating in this program, we heard many people say things like, " That’s not 10X, is it? ." This made me realize that each and every one of us has begun to think more seriously than ever before about what we want our organization to be. We also decided to introduce Google’s famous "20% rule" in the Development Support Division. Previously, there was a hesitation to adopt this program due to the impression that "only Google can do this." However, experiencing this program firsthand changed our mindset to "maybe we can do it too." Moreover, it has been decided that another event will be held by the Development Support Division, three months from now, at the end of June (which is coming up soon). This time we will run it ourselves. We are also preparing to introduce the program in other divisions. The role of facilitators will be crucial. How was it? If you feel that your company culture has challenges or if you want to improve your company, I highly recommend checking out the 10X Innovation Culture Program . You will surely gain valuable insights. Announcement Google Cloud Next Tokyo '24 Speaker confirmed👏👏 On August 2, 2024, our company's Development Support Division General Manager, Kishi, and Awatchi, who is promoting 10X within the company, will be speaking at Google Cloud Next Tokyo '24 to introduce the 10X Innovation Culture Program’s experience workshop. They will share our honest thoughts about how we felt through this experience, so please drop by if you have time.
Hello. I'm @hoshino from the DBRE team. In the DBRE (Database Reliability Engineering) team, our cross-functional efforts are dedicated to addressing challenges such as resolving database-related issues and developing platforms that effectively balance governance with agility within our organization. DBRE is a relatively new concept, so very few companies have dedicated organizations to address it. Even among those that do, there is often a focus on different aspects and varied approaches. This makes DBRE an exceptionally captivating field, constantly evolving and developing. For more information on the background of the DBRE team and its role at KINTO Technologies, please see our Tech Blog article, The need for DBRE in KTC . In this article, I will introduce the improvements the DBRE team experienced after integrating PR-Agent into our repositories. I will also explain how adjusting the prompts allows PR-Agent to review non-code documents, such as tech blogs. I hope this information is helpful. What is PR-Agent? PR-Agent is an open source software (OSS) developed by Codium AI, designed to streamline the software development process and improve code quality through automation. Its main goal is to automate the initial review of Pull Requests (PR) and reduce the amount of time developers spend on code reviews. This automation also provides quick feedback, which can accelerate the development process. Another feature that stands out from other tools is the wide range of language models available. PR-Agent has multiple functions (commands), and developers can select which functions to apply to each PR. The main functions are as follows: Review: Evaluates the quality of the code and identifies issues Describe: Summarizes the changes made in the Pull Request and automatically generates an overview Improve: Suggests improvements for the added or modified code in the Pull Requests Ask: Allows developers to interact with the AI in a comment format on the Pull Requests, addressing questions or concerns about the PR. For more details, please refer to the official documentation . Why we integrated PR-Agent The DBRE team had been working on a Proof of Concept (PoC) for a schema review system that utilizes AI. During the process, we evaluated various tools that offer review functionalities based on the following criteria: Input criteria: Ability to review database schemas based on the KIC’s Database Schema Design Guidelines Ability to customize inputs to the LMM to enhance response accuracy (e.g., integrating chains or custom functions) Output Criteria: To output review results to GitHub, we evaluated whether the following conditions could be met based on the outputs from the LLM: Ability to trigger reviews via PRs Ability to comment on PRs Ability to use AI-generated outputs to comment on the code (schema information) in PRs Ability to suggest corrections at the code level Despite our thorough investigation, we couldn’t find a tool that fully met our input requirements. However, during our evaluation, we decided to experiment with one of the AI review tools used internally in DBRE team, leading to the adoption of PR-Agent. The main reasons for choosing PR-Agent among the tools we surveyed, are as follows: Open source software (OSS) Possible to implement it while keeping costs down Supports various language models It supports a variety of language models, and you can select the appropriate language model to suit your needs. 
Ease of implementation and customization: PR-Agent was relatively easy to implement and offered flexible settings and customization options, allowing us to optimize it for our specific requirements and workflows.

For this project, we used Amazon Bedrock. The reasons for using it are as follows: Since KTC mainly uses AWS, we decided to try Bedrock first because it allows for quick and seamless integration. Compared to OpenAI's GPT-4, using Claude 3 Sonnet through Bedrock reduced costs to about one-tenth. For these reasons, we integrated PR-Agent into the DBRE team's repository.

Customizations implemented during PR-Agent integration

Primarily, we followed the steps outlined in the official documentation for the integration. In this article, we'll detail the specific customizations we made.

Using Amazon Bedrock Claude 3

We utilized the Amazon Bedrock Claude 3 Sonnet language model. Although the official documentation recommends using access key authentication, we opted for ARN-based authentication to comply with our internal security policies. - name: Input AWS Credentials uses: aws-actions/configure-aws-credentials@v4 with: role-to-assume: ${{ secrets.AWS_ROLE_ARN_PR_REVIEW }} aws-region: ${{ secrets.AWS_REGION_PR_REVIEW }}

Manage prompts in GitHub Wiki

Since the DBRE team runs multiple repositories, it was necessary to centralize prompt references. After integrating PR-Agent, we also wanted team members to be able to easily edit and fine-tune prompts. That's when we considered using GitHub Wiki. GitHub Wiki tracks changes and makes it easy for anyone to edit, so we thought that by using it, team members would be able to easily change the prompts. In PR-Agent, you can set extra instructions for each function such as describe through the extra_instructions field in GitHub Actions. ( Official documentation ) #Here are excerpts from the configuration.toml [pr_reviewer] # /review # extra_instructions = "" # Add extra instructions here [pr_description] # /describe # extra_instructions = "" [pr_code_suggestions] # /improve # extra_instructions = "" Therefore, we customized the setup to dynamically add the extra instructions (prompts) listed in the GitHub Wiki through variables in the GitHub Actions workflow where PR-Agent is configured. Here are the configuration steps: First, generate a token using any GitHub account and clone the Wiki repository using GitHub Actions. - name: Checkout the Wiki repository uses: actions/checkout@v4 with: ref: main # Specify any branch (GitHub's default is master) repository: {repo}/{path}.wiki path: wiki token: ${{ secrets.GITHUB_TOKEN_Foobar }} Next, set the information from the Wiki as environment variables: read the contents of each file and set the prompts as environment variables. - name: Set up Wiki Info id: wiki_info run: | set_env_var_from_file() { local var_name=$1 local file_path=$2 local prompt=$(cat "$file_path") echo "${var_name}<<EOF" >> $GITHUB_ENV echo "$prompt" >> $GITHUB_ENV echo "EOF" >> $GITHUB_ENV } set_env_var_from_file "REVIEW_PROMPT" "./wiki/pr-agent-review-prompt.md" set_env_var_from_file "DESCRIBE_PROMPT" "./wiki/pr-agent-describe-prompt.md" set_env_var_from_file "IMPROVE_PROMPT" "./wiki/pr-agent-improve-prompt.md" Finally, configure the action steps for PR-Agent, reading the content of each prompt from the environment variables.
- name: PR Agent action step id: Pragent uses: Codium-ai/pr-agent@main env: # model settings CONFIG.MODEL: bedrock/anthropic.claude-3-sonnet-20240229-v1:0 CONFIG.MODEL_TURBO: bedrock/anthropic.claude-3-sonnet-20240229-v1:0 CONFIG.FALLBACK_MODEL: bedrock/anthropic.claude-v2:1 LITELLM.DROP_PARAMS: true GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} AWS.BEDROCK_REGION: us-west-2 # PR_AGENT settings (/review) PR_REVIEWER.extra_instructions: | ${{env.REVIEW_PROMPT}} # PR_DESCRIPTION settings (/describe) PR_DESCRIPTION.extra_instructions: | ${{env.DESCRIBE_PROMPT}} # PR_CODE_SUGGESTIONS settings (/improve) PR_CODE_SUGGESTIONS.extra_instructions: | ${{env.IMPROVE_PROMPT}} By following the steps outlined above, you can pass the prompts listed on the Wiki to PR-Agent and execute them. What we did to expand review targets to include tech blogs Our company’s tech blogs are managed in a Git repository, which led to the idea of using PR-Agent to review blog articles like code. Typically, PR-Agent is a tool specialized for code reviews. The Describe and Review functions worked somewhat when we tested it on blog articles. Still, the Improve function only returned "No code suggestions found for PR," even after adjusting the prompts (extra_instructions).(This behavior likely occurred because PR-Agent is designed primarily for code review.) To address this, we tested whether customizing the System prompt for the Improve function would enable it to review blog articles. After customization, we received responses from the AI, so we also decided to proceed with customizing the system prompts. System prompt refers to a prompt that is passed separately from the user prompt when invoking LLM. It also includes specific instructions on the items to be output and the format. The extra_instructions that I explained earlier are part of the system prompt, and it appears that if the user provides additional instructions in PR-Agent, those instructions are incorporated into the system prompt. # Here are the excerpts from the system prompt for Improve [pr_code_suggestions_prompt] system="""You are PR-Reviewer, a language model that specializes in suggesting ways to improve for a Pull Request (PR) code. Your task is to provide meaningful and actionable code suggestions, to improve the new code presented in a PR diff. omission {%- if extra_instructions %} Extra instructions from the user, that should be taken into account with high priority: ====== {{ extra_instructions }} # Add the content specified in the extra_instructions. ====== {%- endif %} omission PR-Agent allows you to edit system prompts from GitHub Actions, just like extra_instructions. By customizing the existing system prompts, we expanded the review capabilities to include not only code but also text. Below are some examples of our customizations: First, we modified the instructions specific to the code so they could be used to review tech blogs. System prompt before customization You are PR-Reviewer, a language model that specializes in suggesting ways to improve for a Pull Request (PR) code. Your task is to provide meaningful and actionable code suggestions, to improve the new code presented in a PR diff. # Japanese translation # あなたは PR-Reviewer で、Pull Request (PR) のコードを改善する方法を提案することに特化した言語モデルです。 # あなたのタスクは、PR diffで提示された新しいコードを改善するために、有意義で実行可能なコード提案を提供することです。 System prompt after customization You are a reviewer for an IT company's tech blog. Your role is to review the contents of .md files in terms of the following. 
Please review each item listed as a checkpoint and identify any issues. # Japanese translation # あなたはIT企業の技術ブログのレビュアーです。 # あなたの役割は、.mdファイルの内容を以下の観点からレビューすることです。 # チェックポイントとして示されている各項目を確認し、問題があれば指摘してください。 Next, we modified the section containing the specific instructions so that it works for reviewing tech blogs. Changing the instructions that govern the output would affect the program, so we limited the customization to swapping the code-review wording for text-review wording. System prompt before customization Specific instructions for generating code suggestions: - Provide up to {{ num_code_suggestions }} code suggestions. The suggestions should be diverse and insightful. - The suggestions should focus on ways to improve the new code in the PR, meaning focusing on lines from '__new hunk__' sections, starting with '+'. Use the '__old hunk__' sections to understand the context of the code changes. - Prioritize suggestions that address possible issues, major problems, and bugs in the PR code. - Don't suggest to add docstring, type hints, or comments, or to remove unused imports. - Suggestions should not repeat code already present in the '__new hunk__' sections. - Provide the exact line numbers range (inclusive) for each suggestion. Use the line numbers from the '__new hunk__' sections. - When quoting variables or names from the code, use backticks (`) instead of single quote ('). - Take into account that you are reviewing a PR code diff, and that the entire codebase is not available for you as context. Hence, avoid suggestions that might conflict with unseen parts of the codebase. System prompt after customization Specific instructions for generating text suggestions: - Provide up to {{ num_code_suggestions }} text suggestions. The suggestions should be diverse and insightful. - The suggestions should focus on ways to improve the new text in the PR, meaning focusing on lines from '__new hunk__' sections, starting with '+'. Use the '__old hunk__' sections to understand the context of the code changes. - Prioritize suggestions that address possible issues, major problems, and bugs in the PR text. - Don't suggest to add docstring, type hints, or comments, or to remove unused imports. - Suggestions should not repeat text already present in the '__new hunk__' sections. - Provide the exact line numbers range (inclusive) for each suggestion. Use the line numbers from the '__new hunk__' sections. - When quoting variables or names from the text, use backticks (`) instead of single quote ('). After that, add a new Wiki page for the system prompt, following the steps in "Manage prompts in GitHub Wiki" explained earlier. - name: Set up Wiki Info id: wiki_info run: | set_env_var_from_file() { local var_name=$1 local file_path=$2 local prompt=$(cat "$file_path") echo "${var_name}<<EOF" >> $GITHUB_ENV echo "$prompt" >> $GITHUB_ENV echo "EOF" >> $GITHUB_ENV } set_env_var_from_file "REVIEW_PROMPT" "./wiki/pr-agent-review-prompt.md" set_env_var_from_file "DESCRIBE_PROMPT" "./wiki/pr-agent-describe-prompt.md" set_env_var_from_file "IMPROVE_PROMPT" "./wiki/pr-agent-improve-prompt.md" + set_env_var_from_file "IMPROVE_SYSTEM_PROMPT" "./wiki/pr-agent-improve-system-prompt.md" - name: PR Agent action step (omitted) + PR_CODE_SUGGESTIONS_PROMPT.system: | + ${{env.IMPROVE_SYSTEM_PROMPT}} By following the steps outlined above, we customized PR-Agent’s Improve function, which typically specializes in code review, to support the review of blog articles.
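For reference, putting the customizations above together, the overall GitHub Actions workflow ends up looking roughly like the following. This is a condensed sketch rather than our exact file: the trigger, the permissions block, and the wiki repository path are illustrative assumptions, while the individual steps follow the snippets shown earlier.

```yaml
name: pr-agent-review
on:
  pull_request:           # trigger is an assumption; adjust to when you want reviews to run
permissions:
  contents: read
  pull-requests: write    # so PR-Agent can comment on the PR (adjust as needed)
  id-token: write         # needed for the ARN (OIDC) based AWS authentication described above
jobs:
  pr_agent:
    runs-on: ubuntu-latest
    steps:
      - name: Input AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN_PR_REVIEW }}
          aws-region: ${{ secrets.AWS_REGION_PR_REVIEW }}
      - name: Checkout the Wiki repository
        uses: actions/checkout@v4
        with:
          repository: your-org/your-repo.wiki   # hypothetical wiki path
          path: wiki
          token: ${{ secrets.GITHUB_TOKEN_Foobar }}
      - name: Set up Wiki Info
        run: |
          set_env_var_from_file() {
            local var_name=$1
            local file_path=$2
            local prompt=$(cat "$file_path")
            echo "${var_name}<<EOF" >> $GITHUB_ENV
            echo "$prompt" >> $GITHUB_ENV
            echo "EOF" >> $GITHUB_ENV
          }
          set_env_var_from_file "REVIEW_PROMPT" "./wiki/pr-agent-review-prompt.md"
          set_env_var_from_file "DESCRIBE_PROMPT" "./wiki/pr-agent-describe-prompt.md"
          set_env_var_from_file "IMPROVE_PROMPT" "./wiki/pr-agent-improve-prompt.md"
          set_env_var_from_file "IMPROVE_SYSTEM_PROMPT" "./wiki/pr-agent-improve-system-prompt.md"
      - name: PR Agent action step
        uses: Codium-ai/pr-agent@main
        env:
          CONFIG.MODEL: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
          AWS.BEDROCK_REGION: us-west-2
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR_REVIEWER.extra_instructions: |
            ${{ env.REVIEW_PROMPT }}
          PR_DESCRIPTION.extra_instructions: |
            ${{ env.DESCRIBE_PROMPT }}
          PR_CODE_SUGGESTIONS.extra_instructions: |
            ${{ env.IMPROVE_PROMPT }}
          PR_CODE_SUGGESTIONS_PROMPT.system: |
            ${{ env.IMPROVE_SYSTEM_PROMPT }}
```

Because the prompts live in the repository wiki, editing a wiki page changes the review behavior on the next run without modifying the workflow file itself.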
One caveat: even after modifying the system prompt, the responses may not always be exactly what you expect. The same is true when using the Improve function on program code. Results of installing PR-Agent Implementing PR-Agent has brought the following benefits: Improved review accuracy It highlights issues we often overlook, improving the accuracy of our code reviews. It also allows for the review of past closed PRs, providing opportunities to reflect on older code. Reviewing past PRs helps us continually enhance the quality and integrity of our codebase. Reduced burden of creating pull requests (PRs) The pull request summary feature makes creating pull requests easier. Reviewers can quickly see the summary, improving review efficiency and shortening merge times. Improved engineering skills Keeping up with rapid technological advances while managing daily duties can be challenging. The AI’s suggestions have been very effective for learning best practices. Tech Blog Reviews Applying PR-Agent to our tech blog has reduced the burden of reviews. Although it's not perfect, it checks articles for spelling mistakes, grammar issues, and consistency of content and logic, helping us find errors that are easy to overlook. Below is an example of a review of an actual tech blog post ( Event Report DBRE Summit 2023 ). ![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_describe_blog.png =800x) Summary of the Pull Request (PR) for the tech blog by PR-Agent (Describe) ![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_review_blog_01.png =800x) ![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_review_blog_02.png =800x) Review of the Pull Request (PR) for the tech blog by PR-Agent (Review) ![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_improve_blog.png =800x) Proposed changes to the tech blog by PR-Agent (Improve) It is also important to note that a human must make the final decision, for the following reasons: The PR-Agent review results for the exact same Pull Request (PR) can vary from run to run, and the accuracy of the responses can be inconsistent. PR-Agent reviews may generate irrelevant or completely off-target feedback. Conclusion In this article, we introduced how the implementation and customization of PR-Agent have improved work efficiency. While complete review automation is not yet possible, through configuration and customization, PR-Agent plays a supportive role in enhancing the productivity of our development teams. We aim to continue using PR-Agent to improve efficiency and productivity further.
Introduction Hello! I'm Hasegawa , an Android engineer at KINTO Technologies! I usually work on developing an app called my route . Please check out the other articles written by members of my route's Android Team! Potential Bug Triggers in Android Development Due to Regional Preferences SwiftUI in Compose Multiplatform of KMP In this article, I will introduce how to get OG information in Kotlin (Android) and how to deal with character codes in the process. To be explained in this article What is OGP? How to get OGP in Kotlin Why the text obtained via OGP gets corrupted How to deal with corrupted text What is OGP? OGP stands for "Open Graph Protocol," a set of HTML meta elements that let other services display a web page's title and image correctly when the page is shared. Web pages configured with OGP carry this information in meta tags, and services that want OG information can read it from those tags. The following is an excerpt of such meta tags. <meta property="og:title" content="page title" /> <meta property="og:description" content="page description" /> <meta property="og:image" content="thumbnail image URL" /> How to get OGP in Kotlin This time, I will use OkHttp for communication and Jsoup for HTML parsing. First, use OkHttp to access the web page at the URL whose OG information you want. I will omit error handling since it varies depending on the requirements. val client = OkHttpClient.Builder().build() val request = Request.Builder().apply { url("URL for wanted OG information") }.build() client.newCall(request).enqueue( object : okhttp3.Callback { override fun onFailure(call: okhttp3.Call, e: java.io.IOException) {} override fun onResponse(call: okhttp3.Call, response: okhttp3.Response) { parseOgTag(response.body) } }, ) Then parse the contents using Jsoup. private fun parseOgTag(body: ResponseBody?): Map<String, String> { val html = body?.string() ?: "" val doc = Jsoup.parse(html) val ogTags = mutableMapOf<String, String>() val metaTags = doc.select("meta[property^=og:]") for (tag in metaTags) { val property = tag.attr("property") val content = tag.attr("content") val matchResult = Regex("og:(.*)").find(property) val ogType = matchResult?.groupValues?.getOrNull(1) if (ogType != null && !content.isNullOrBlank()) { ogTags[ogType] = content } } return ogTags } Now ogTags has the necessary OG information. Why the text obtained via OGP gets corrupted With the code so far, you can get the OG information of most web pages correctly. However, for some web pages, the text may come out corrupted. Here, I will explain the cause. Above, I called string() as shown below. val html = response.body?.string() ?: "" This function selects the character code in the following order of precedence: BOM (Byte Order Mark) information Response header charset UTF-8, if neither 1 nor 2 specifies one More information can be found in the OkHttp repository comments . In other words, what happens if there is no BOM information, no response header charset, and the web page is encoded in a non-UTF-8 format such as Shift_JIS? ... Text corruption occurs, because the body is decoded with the default UTF-8. So what can we do? I will explain the workaround in the next section. How to deal with corrupted text We identified the cause of the corrupted text in the previous section. In fact, the character code may be specified in the HTML of the web page itself, as shown below. If there is no BOM information and no response header charset, this information can be used.
<meta charset="UTF-8"> <!-- HTML5 --> <meta http-equiv="content-type" content="text/html; charset=Shift_JIS"> <!-- before HTML5 --> However, you might think there is a chicken-and-egg problem here: the HTML has to be decoded with the right character code before you can read the meta tag that specifies it. In practice, UTF-8 and Shift_JIS are compatible within the ASCII range, so it is not a problem to decode with UTF-8 once just to find the meta tag. (This method may parse twice. If you check the byte array for the meta tag beforehand, you may be able to determine the character code before parsing, but this time I prioritized code readability.) So, you can write code like the following. /** * Get the Jsoup Document from the response body * If the response body charset is not UTF-8, parse again with the correct charset */ private fun getDocument(body: ResponseBody?): Document { val byte = body?.bytes() ?: byteArrayOf() // If a charset is specified in the response header, decode with that charset val headerCharset = body?.contentType()?.charset() val html = String(byte, headerCharset ?: Charsets.UTF_8) val doc = Jsoup.parse(html) // If headerCharset is specified, the document has already been parsed with the correct charset, so return it as is if (headerCharset != null) { return doc } // Get the charset from the meta tag in the HTML. // If this charset is not present, the character code is unknown and the UTF-8 parsed doc is returned. val charsetName = extractCharsetFromMetaTag(html) ?: return doc val metaCharset = try { Charset.forName(charsetName) } catch (e: IllegalCharsetNameException) { Timber.w(e) return doc } // If the charset specified in the meta tag differs from UTF-8, parse again with the charset from the meta tag // Parsing is a relatively heavy process, so don't do it twice unnecessarily return if (metaCharset != Charsets.UTF_8) { Jsoup.parse(String(byte, metaCharset)) } else { doc } } /** * Get the charset string from the HTML meta tag * * Before HTML5 -> meta[http-equiv=content-type] * HTML5 or later -> meta[charset] * * @return the charset string, e.g., "UTF-8", "Shift_JIS", or null if no charset is found */ private fun extractCharsetFromMetaTag(html: String): String? { val doc = Jsoup.parse(html) val metaTags = doc.select("meta[http-equiv=content-type], meta[charset]") for (metaTag in metaTags) { if (metaTag.hasAttr("charset")) { return metaTag.attr("charset") } val content = metaTag.attr("content") if (content.contains("charset=")) { return content.substringAfter("charset=").split(";")[0].trim() } } return null } Then, let's change the function that creates the Jsoup Document to use the getDocument() we just wrote. - val html = body?.string() ?: "" - val doc = Jsoup.parse(html) + val doc = getDocument(body) Conclusion Thank you for reading this far. Most web pages use the UTF-8 character code, and even when a different one is used, the charset is usually specified in the BOM or the response header. Therefore, I do not think this kind of problem will occur very often. However, if you do run into such a site, it can be difficult to understand the cause and how to fix it. I hope this article will help you.
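As a postscript, here is how the pieces above fit together: parseOgTag() rewritten to call the charset-aware getDocument() instead of body?.string(). This is just a recap sketch of the code already shown, with error handling still omitted.

```kotlin
import okhttp3.ResponseBody
import org.jsoup.nodes.Document

// Sketch: parseOgTag() using the charset-aware getDocument() defined above,
// so the body is decoded with the correct character code before Jsoup parses it.
private fun parseOgTag(body: ResponseBody?): Map<String, String> {
    val doc: Document = getDocument(body)
    val ogTags = mutableMapOf<String, String>()
    for (tag in doc.select("meta[property^=og:]")) {
        val property = tag.attr("property")
        val content = tag.attr("content")
        val ogType = Regex("og:(.*)").find(property)?.groupValues?.getOrNull(1)
        if (ogType != null && content.isNotBlank()) {
            ogTags[ogType] = content // e.g., "title" -> "page title"
        }
    }
    return ogTags
}
```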
Hello. My name is Zume, and I am the group manager of the Quality Assurance (QA) Group at KINTO Technologies. Although I have a long and extensive history in QA, I haven’t been particularly focused on sharing my experience or knowledge until now. However, I thought it would be a good idea to take some time to gather my thoughts, but before I knew it, 2022 came to an end with the ringing of the bells on New Year's Eve. It's tough to find time for myself when I’m usually busy with work. This has always been my excuse for not making time for personal projects. If I keep saying "I'll do it next month" a few more times, I'll soon find myself welcoming a new year. About test management This time, I would like to introduce the benefits of the test management tools used by my group and the journey we took to implement them. To all the QA engineers reading this article, how are you managing your test cases? Some of you may already be using some kind of paid test management tool. Generally, Excel or Spreadsheets tend to be used for managing test cases and test executions. However, when using Excel or Spreadsheets for test management, we encountered several challenges: Challenges in the test process - Issues - ⇒ Concerns (potential issues) Test case structuring often becomes personalized by the test designers, and case classifications and formats vary. ⇒When the designer changes, the handover process becomes complicated ⇒Due to the lack of standardized format, it takes time to understand cases when the project changes. To review the cases, you need to open and check the contents of files each time. ⇒It is difficult to share documents and know-how within the team. Stakeholders (other than QA) have a hard time getting an overview of the test content and results. ⇒The QA side needs to prepare reports for stakeholders. For regression testing, a new file needs to be created for each test cycle. ⇒It becomes difficult to track which cases were reused. It is difficult to follow the change history or update history of test cases. ⇒Maintenance, including case updates, takes a lot of time (plus Excel is not suitable for simultaneous online editing by multiple users) Since the test execution results are entered manually, the exact execution time is unknown. ⇒It is challenging to pinpoint the exact time when defects occur Test cases and bug reports are not linked ⇒It becomes difficult to compile statistics such as the defect rate for each function (manual compilation is possible but very tedious). And so on. To address these challenges, we considered implementing tools that support a series of test activities such as test requirements management, test case creation, test execution, and test reporting. In fact, we never considered using Excel or Spreadsheets from the beginning. This is because we knew from our experience that once Excel-based operations become ingrained, it takes a lot of time to shift away from them. Evaluation of tools to be implemented Initially, the tools we considered were: TestLink : An open source web-based test management tool. Free of charge. TestRail : A web-based test management tool. Paid. Zephyr for JIRA : A JIRA plugin. Paid. (Renamed to Zephyr Squad in 2021[^1]) [^1]: Zephyr for Jira is now Zephyr Squad , SmartBear Software., 2021 One of the reasons we considered TestLink was my experience with it at my previous workplace. Another advantage is that it can be tested right away by installing Docker even in a local environment. 
In fact, I once used a Mac for both testing and running TestLink. However, I joined KINTO Technologies in March 2020 (when it was still KINTO Co., Ltd.), and the project for which we planned to introduce the tool was scheduled to be released two months later, in May. To make things more challenging, the first state of emergency due to the spread of the new COVID-19 was declared in April during this period. In such a nerve-wracking situation, which tool did we choose as the most appropriate option? It was Zephyr for JIRA . The biggest advantage was that it could be implemented quickly as an add-on for JIRA, which was already being used within the company. Additionally, considering the unexpected shift to remote work during the COVID-19 pandemic, it was convenient since it could be accessed from outside the company. Although it was a paid tool, we decided to start using it with the idea that if we could get through the May release, we would reassess its continued use. Looking back at my notes from that time: Since it's a JIRA plugin, I thought I could change the language settings, but it seems only parts of it support Japanese. Zephyr's reports are based on scenarios, and there is no reporting function for individual test steps. etc. [^2] [^2]: ※It seems that requests for step-by-step reports have been made by users as early as 2013, according to the Atlassian community. However, in the comment , TestFLO was recommended as an alternative solution. These notes reflect our trial and error process. It brings back all the memories. Although it was easy to implement, it is still essential for users to be familiar with the system and possess the necessary skills. In that sense, I am still grateful to the team members who flexibly navigated that chaotic period with me. Using Zephyr It's been almost three years(!)Even though it’s an old story, here are my impressions of using Zephyr for JIRA. As it is a JIRA plugin, test cases can be created in the same way as normal issues by selecting the desired issue type. Case items include steps, expected values, results, status, comments, and file attachments, making it convenient to leave screen captures as evidence for each step. On the other hand, it took quite a long time to load the plugin itself. The problem was that it took a few seconds each time we changed screens. A similar question for help was posted on the Atlassian community, so it may be a Zephyr-specific issue . And now to TestLink Now, let's talk about test management after we somehow managed to meet the release schedule and handed off the project in May 2020. We reconsidered the cost aspect as well. Assuming the tool is linked to JIRA and the number of users is around 10 to 20 people, the prices as of 2020 were as follows: Zephyr for JIRA: 11-100 Users ¥510/user/month ⇒ ¥10,200/month for 20 users TestRail: 1-20 Users $32/user/month ⇒ $640 (approx. ¥83,200)/month for 20 users The prices as of 2023 are as follows: Zephyr Squad: 11-100 Users ¥570/user/month ⇒ ¥11,400/month for 20 users TestRail: 1-20 Users $37/user/month ⇒ $740 (approx. ¥96,200)/month for 20 users The fee structure has changed slightly since then, and the prices have gone up a bit. *All prices are calculated at 130 yen to the dollar At first glance, Zephyr seems like a good deal, but since it is a plugin for JIRA, you will actually need to have the same number of licenses as you do for JIRA. 
In that regard, since not everyone in the Development Division will use it and only QA members will, we want to avoid increasing costs as the organization expands. Still, TestRail is quite expensive. Considering the cost, there is no better option than the free TestLink. Although the UI of TestLink is not the best (it's open source so I won't complain), as a test management tool, it can at least resolve the issues mentioned above as follows. Testing process challenges and their solutions Challenges in the test process When the tool is implemented Concerns when using the tool 1. Test case structuring often becomes personalized by the test designers, and case classifications and formats vary. By describing test suites, test cases, test steps, etc. in a certain format with appropriate detail, a certain degree of granularity is achieved. Easy handover and case deciphering! 2. To review the cases, you need to open and check the contents of files each time. High visibility of implementation items and easy tracing to test requirements make it easy to understand coverage Documents can be easily shared within and outside the team! 3. It is difficult for stakeholders to get an overview of the test contents and results. With real-time tracking of test progress and results viewable in reports, there’s no need for QA to create reports! 4. For regression testing, a new file needs to be created for each test cycle. It can be used on a test suite basis It's easy to identify reusable components! 5. It is difficult to record the change history and update history of test cases. In addition to adding and modifying test cases, the history can be recorded. Case maintenance is easier! 6. Since the test execution results are entered manually, the exact execution time is unknown. Bug reports, execution times, and execution record are accurately logged. You can narrow down the implementation time period! 7. Test cases and bug reports are not linked Easier tracking of requirements/releases, such as test progress rate and defect occurrence rate It's easy to compile data such as the defect occurrence rate for each function! So, we decided to introduce TestLink from June 2020 onwards. Well, I'm sure my teammates will get annoyed if I say it's easy, but the truth is that while the tool isn’t omnipotent, it's a lot easier than using data files like Excel. Postscript Even though it's free, there are still infrastructure costs to run it. We are using an AWS instance for TestLink, which costs several tens of thousands of yen per year. It has been almost three years since we started using it, and so far we have been able to operate it without any major issues. In this article, I explained how we implemented TestLink as a test management tool in the QA group. In future posts, I hope to discuss how TestLink is used in actual projects, its integration with JIRA, and more.
Introduction I am Kanaya, a member of the KINTO FACTORY project, a service that allows you to renovate and upgrade your car. In this article, I will introduce our efforts to improve Deploy Traceability to Multiple Environments Utilizing GitHub and JIRA. Last time, I also wrote an article related to Remote Mob Programming in the Payments Team. Background and Challenges I joined the KINTO FACTORY project from the latter half of the development process. I was assigned as the frontend team leader for the e-commerce site project, and during my time in charge, I noticed the following issues: GitHub Issues, JIRA, and Excel are used for task management, making progress difficult to manage Difficult to track which tasks are deployed in which environment Troublesome to generate release notes when deploying to a test environment ![Excel WBS and Gantt chart example](/assets/blog/authors/kanaya/traceability_excel_gantt.png =480x) Excel WBS and Gantt chart example First, managing progress was difficult. At the time I joined the project, there were three types of task management tools: GitHub Issues, JIRA, and Excel WBS and Gantt charts, all of which were in use. This lack of centralized control of necessary information made it difficult to manage schedules and tasks. Second, it was difficult to track which tasks are deployed in which environment. During development, there were two target environments for deployment (a development environment and a test environment), making it challenging to know which environment the task under development had already been deployed to. Lastly, Troublesome to generate release notes when deploying to a test environment. Since the test environment was used for testing not only by us engineers, but also by the QA team responsible for quality assurance, we needed to communicate when and which content was deployed to it. We used to create release notes as a communication method, but writing them each time took about 5 minutes and was quite stressful. Our goal was to improve deployment traceability to address these issues. At least issue 2 and 3 (environment-specific deployment management issues, release note generation issues) are expected to be resolved. In addition, we aim to resolve issue 1 (difficulty in managing progress) by changing the way of work, as described later. Policy to Enhance Deployment Traceability First of all, traceability is described in DevOps technology: Version Control | DevOps Capabilities as follows. Among these, it is required that differences between multiple environments are either avoided or quickly identified once they occur. Note that version control of all dependencies can be managed in package.json, package-lock.json of npm for the frontend, so I'll skip that here. No matter which environment is chosen, it is essential to quickly and accurately determine the versions of all dependencies used to create the environment. Additionally, the two versions of the environment should be compared to understand the changes between them. As a policy to improve traceability to manage which tasks are deployed to which environments, we did the following: Manage all tasks and deployments with JIRA Rely on automatic generation of release notes Manage all tasks and deployments with JIRA JIRA has a feature to view development information for an issue . Since we know the status of code, reviews, builds, and deployments, we decided to consolidate all development information into JIRA. 
To integrate JIRA and GitHub, the following steps are required: Set up for JIRA and GitHub integration Include the JIRA ticket number in the branch name to connect the JIRA ticket with the GitHub pull request Set up the environment during deployment with GitHub Actions The second step was the part left to the work of each engineer. In asking each engineer to include the JIRA ticket number, we have decided to eliminate the use of GitHub Issues and Excel, and unify the use of JIRA. By unifying to JIRA, each engineer can manage tasks more easily, and those who manage progress can also use JIRA's roadmaps for centralized management. JIRA roadmap example For the third step, by passing environment parameter to deploy, the deployment status passed to environment will also be linked to JIRA. For reference, here is some of the deployment code by GitHub Actions we are using. In the environment parameter, $${ inputs.env }} is further passed, so that a key for each environment is created. Since $${ inputs.env } contains the environment name of the deployment destination, the deployment destination will be integrated with JIRA. DeployToECS: needs: [createTagName, ecr-image-check] if: ${{ needs.createTagName.outputs.TAG_NAME != '' && needs.ecr-image-check.outputs.output1 != '' }} runs-on: ubuntu-latest environment: ${{ inputs.env }}-factory-frontend steps: - Specific processing As a result, the development status is managed by JIRA roadmaps and tickets, and each ticket can be viewed to manage whether it is under review, merged but not deployed, and to what environment it has been deployed. Status listed on each JIRA ticket Visualizing the deployment status across all tickets, not just each ticket, is also possible. It is useful to see when each ticket was deployed and to which environment. Visualization of deployment status to each environment :::message GitHub also has a project function that can achieve this to some extent, but in light of the roadmap feature and integration with tools used by the QA team , we are unified with JIRA. ::: Rely on automatic generation of release notes For automatic generation of release notes, we decided to use GitHub's automatically generated release notes feature. The automatic generation of release notes is a feature that lists the titles and links of pull requests for the release note portion of GitHub's release feature . It can be better handled by setting a few rules. Here is an introduction. Define the category of release content The pull requests listed in the release notes are not categorized by default, making them difficult to view. Categorizing the pull requests helps keep release notes organized and easy to view. Categories are represented by labels. This time, I wanted to specifically display major changes and bug fixes as categories in the release notes, so I created 'enhancement' and 'bug' labels to represent each. You can also generate a list of pull request titles by category by creating a file .github/release.yml in the target repository and writing the following. changelog: categories: - title: Major Changes labels: - 'enhancement' - title: Bug Fixes labels: - 'bug' - title: Others labels: - '*' An image of the generated release notes is shown below. Pull requests labeled 'enhancement' and 'bug' are now classified as 'Major Changes' and 'Bug Fixes,' respectively All pull requests without 'enhancement' and 'bug' labels are classified as 'others.' 
Category sorting and title correction at the time of pull request review It is possible to generate release notes and then manually sort them, but once they are generated, it is difficult to remember and sort them. Therefore, at the time of the pull request review, we assign labels that correspond to the categories. We also check the titles to ensure they are appropriate for the content correction. To avoid forgetting to apply labels, others labels are given to refactoring, etc. This ensures that we know the review and category sorting are complete. Results Through the above efforts, we were able to successfully resolve the issues we were facing. In particular, the JIRA roadmaps have been referenced by other teams and are now used throughout the KINTO FACTORY project. Previously, GitHub Issues, JIRA, and Excel were used for task management, making progress difficult to manage. Now, centralized and managed in JIRA tickets and roadmaps. Previously, it was difficult to track which tasks are deployed in which environment. Now, deployment status of each environment is now visible in tickets. Previously, creating release notes when deploying to a test environment was troublesome. Now, work that used to take 2-3 minutes has drastically decreased to 10 seconds. Future Development By deploying to production environment, JIRA can measure two of the DevOps Four Keys in terms of speed. Our team will collaborate to identify the current status and target metrics for deployment frequency and change lead time for continuous improvement. Deployment frequency to production environment Lead time from merge to deploy to production environment The KINTO FACTORY project is looking for team members who will work together to achieve service growth. If you are interested in this article or KINTO FACTORY, check out the job listings below! [KINTO FACTORY Full Stack Engineer] KINTO FACTORY Development Project Team, Tokyo [KINTO FACTORY Backend Engineer] KINTO FACTORY Development Project Team, Tokyo [KINTO FACTORY Frontend Engineer] KINTO FACTORY Development Project Team, Tokyo
Introduction Hello, I am Keyuno and I am part of the KINTO FACTORY front end development team. As part of our KINTO FACTORY service, we are launching a dedicated magazine using Strapi , a headless content management system (CMS). *More details will be shared in an upcoming article, so please stay tuned! :::message What is Strapi? A Headless CMS with high front-end scalability Low implementation costs with default APIs for content retrieval As an open-source software (OSS), APIs can be added and expanded as needed. ::: In this article, I would like to explain how to add custom APIs to Strapi, which we implemented when introducing Strapi. This article covers the following two patterns of custom API implementation. :::message Custom API implementation patterns and use cases Implementing a new custom API We want to retrieve and return entries from multiple collectionType (content definitions) We want it to return the results of business logics that cannot be fully covered by the default API. Overriding the default API We aim to modify entry retrieval by replacing the auto-assigned postId with a custom UID. ::: Optimizing web page management is a constant challenge. I hope this article helps ease the burden for engineers, even if just a bit. Development Environment Details Strapi version : Strapi 4 node version : v20.11.0 Implementing a new custom API This section shows how to implement a new custom API. While this approach offers high flexibility because it can be implemented at the SQL level, overdoing it can make maintenance difficult, so use it wisely. 1. Create a router First, add the routes for the API endpoints you create. Under src/api , there is a directory for each collectionType. In the figure below, the routes directory is under post . Create a file under routes for defining custom-route. *According to the official documentation, there is a command npx strapi generat that prepares the necessary files (though I haven’t used it). In the created file, write the following code: export default { routes: [ { method: "GET", // Refers to the HTTP method. Please modify as needed to suit your purposes. path: "/posts/customapi/:value", // These are the endpoints for the APIs you will implement. handler: "post.customapi", // Specify the controller that this route refers to. } }; method Specify the HTTP method. Please modify as needed to suit the API you are creating. path Specify the endpoint for the custom API you are implementing. The sample endpoint, /:value indicates that the trailing value is received as the value variable. For example, if /posts/customapi/1 and /posts/customapi/2 are called, the value will store 1 and 2 respectively. handler Specify the controller (explained later) that the custom API you are implementing refers to. Specify the name of the function in the controller that you want to reference. 2. Implement the controller Implement the controller referenced by the routes implemented in step 1. Open the post.ts file located in the controllers directory , which is in the same level as the routes directory. 
In this file, add the handler ( customapi ) specified in the previous route to the default controller (CoreController) as follows: Before change (initial state) import { factories } from '@strapi/strapi'; export default factories.createCoreController('api::post.post'); After change import { factories } from "@strapi/strapi"; export default factories.createCoreController("api::post.post", ({ strapi }) => ({ async customapi(ctx) { try { await this.validateQuery(ctx); const entity = await strapi.service("api::post.post").customapi(ctx); const sanitizedEntity = await this.sanitizeOutput(entity, ctx); return this.transformResponse(sanitizedEntity); } catch (err) { ctx.body = err; } }, })); What’s changed Added a custom handler customapi() to the default controller Retrieved the result of executing the customapi() service that contains the business logic (the call on line 8) :::message In this section, the business logic is moved to the service layer, but it is also possible to implement the business logic in the controller (choose the layer based on reusability and readability). ::: For details on validateQuery(), sanitizeOutput(), and transformResponse() , please refer to Strapi’s official documentation . 3. Implement the service Implement the service referenced by the controller implemented in step 2. Open post.ts in the services directory , which is at the same level as the controllers directory. Add the method ( customapi ) specified in the previous controller to the default service (CoreService) as shown below. Before change (initial state) import { factories } from '@strapi/strapi'; export default factories.createCoreService('api::post.post'); After change import { factories } from "@strapi/strapi"; export default factories.createCoreService("api::post.post", ({ strapi }) => ({ async customapi(ctx) { try { const queryParameter: { storeCode: string[]; userName: string } = ctx.query; const { parameterValue } = ctx.params; const sql = "/** Database to use, SQL according to purpose */"; const [allEntries] = await strapi.db.connection.raw(sql); return allEntries; } catch (err) { return err; } }, })); What’s changed Added the custom service customapi() to the default service Line 6: Retrieve the query parameter information Line 7: Obtain the endpoint parameter information Line 10: Get the SQL execution results :::message You can use strapi.db.connection.raw(sql) to execute SQL directly , but Strapi also provides other ways to obtain data. For other methods of obtaining data, please refer to the official documentation . ::: 4. Confirm operation With this, the implementation of the new custom API is complete. Please actually try calling the API and check that it works as expected. Overriding the default API In this section, I will show an example of how to override the default entry detail retrieval API so that it fetches entries using a custom parameter. [Entry detail retrieval API] [Before override] GET /{collectionType}/:postId(number) [After override] GET /{collectionType}/:contentId(string) 1. Create a router This is basically the same as when implementing a new custom API. Add the following code to custom.ts under the routes directory: export default { routes: [ { method: "GET", path: "/posts/:contentId", handler: "post.findOne", }, ], }; With this route addition, the endpoint that previously retrieved entry details using /posts/:postId(number) now retrieves them using /posts/:contentId(string) (entry details can no longer be retrieved using /posts/:postId(number) ). 2.
Implement the controller The implementation of the controller is basically the same as when implementing a new custom API. Modify the post.ts in the controllers directory, which is at the same level as the routes directory, as follows: Before change (initial state) import { factories } from '@strapi/strapi'; export default factories.createCoreController('api::post.post'); After change import { factories } from "@strapi/strapi"; import getPopulateQueryValue from "../../utils/getPopulateQueryValue"; export default factories.createCoreController("api::post.post", ({ strapi }) => ({ async findOne(ctx) { await this.validateQuery(ctx); const { contentId } = ctx.params; const { populate } = ctx.query; const entity = await strapi.query("api::post.post").findOne({ where: { contentID: contentId }, ...(populate && { populate: getPopulateQueryValue(populate), }), }); const sanitizedEntity = await this.sanitizeOutput(entity, ctx); return this.transformResponse(sanitizedEntity); }, })); What’s changed Added a custom findOne() controller to the default controller In line 12, it extracts records where the contentID column matches contentId . Since .findOne() is used in line 11, the result will be a single object. :::message Lines 13-15 follow the process for applying the populate parameter provided by the default API. If you want to fetch videos or images from mediaLibrary, you must add populate , so please be aware. ::: In this section, the business logic is implemented in the controller rather than the service. 3. Confirm operation With this, the implementation to override the default API is complete. Please actually try calling the API and check that it works as expected. Conclusion This concludes the explanation of implementing custom API in Strapi. I think Strapi is a highly customizable and great tool. Therefore, I hope to continue sharing my knowledge, and I would be happy if you could share your insights as well. We also have other topics, such as: Automatically building applications when publishing Strapi articles. Embedding videos (e.g., mp4) in CKEditor. I will cover these topics in future articles. Thank you for reading.
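As a postscript to the "Confirm operation" steps above, here is a hypothetical client-side call for trying the endpoints described in this article. The host name, API token, and the /api prefix (Strapi 4's default REST prefix) are assumptions, so adjust them to your configuration and permission settings.

```typescript
// Hypothetical example of calling the custom endpoints defined in this article.
// BASE_URL, the token variable, and the /api prefix are assumptions — adjust to your setup.
const BASE_URL = "https://cms.example.com/api";
const headers = { Authorization: `Bearer ${process.env.STRAPI_API_TOKEN}` };

// New custom API: GET /posts/customapi/:value
const custom = await fetch(`${BASE_URL}/posts/customapi/1`, { headers });
console.log(await custom.json());

// Overridden detail API: GET /posts/:contentId (a string UID instead of the numeric postId)
const detail = await fetch(`${BASE_URL}/posts/my-first-post`, { headers });
console.log(await detail.json());
```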
Overview Hello, we're Mori, Maya S, and Flo from the Operations Enhancement Team at the Global Development Group. The Global Development Group organized an in-house Hackathon-style event called the "KINTO Global Innovation Days" which took place over six days from December 14th to 21st. During the first four days from December 14th to 19th, three seminars were held, followed by two days dedicated to actual development. This was the first time that such an event was held within KINTO Technologies. This article is the first in a series of articles on the event, sharing the journey leading up to it. How it started KINTO Technologies currently consists of about 300 members and has roughly doubled in size in about two years. Among them, the Global Development Group is also currently a large group of 60 members. As an organization, we are subdivided into teams of 5 to 10 members, each performing their tasks but communication across teams has always been a challenge. Even within the Global Development Group, it's common for people to struggle with matching faces to names. In addition, although we had planned and organized internal study sessions to improve communication and skills, they inevitably turned out to be a one-way knowledge sharing. We were looking for an opportunity for engineers to learn through hands-on activities. In July, several of our group members participated in a Hackathon at Toyota Motor North America (TMNA) , which made us think that hosting such an event within our group could address the above issues. So, we decided to start planning and proposing this event at the end of August. Objectives and Timing While hackathon events have a variety of benefits in general, our primary objective this time was to stimulate cross-team communication. We believe that by not leaning too much on the business side, a certain degree of freedom in thinking was gained. We also set a goal of holding the event by the end of 2022 at the latest. The reason was that a major project involving the entire group was set to be completed by November, making it difficult to anticipate tasks beyond the fourth quarter. Research and Content Review Since this was our first time organizing an event, we first researched hackathon cases around the world to consider what an actual event should be like. Maya S was in charge of this research. As various role models were studied, mainly from other companies' tech blogs and hackathon event sites, a pattern began to become apparent. By picking up the elements of the pattern and combining them with aspects that fit our organization and goals, we were able to put together the contents for our Innovation Days. Many examples of findings could be presented, but I will explain three of them below. Finding 1: Benefits As we prepared for the event, we felt the need to communicate the benefits of participating to the participants, stakeholders, and everyone involved. For example, the benefits to the organization include opportunities for gaining ideas for intellectual property, increasing member engagement, and discovering new strengths. As for the benefits to individuals, we emphasized that they can learn in various aspects by coming up with ideas that cannot be tackled in their daily work and by interacting with work processes and members they would not usually encounter. Findings 2: Content ideas Based on the above benefits, the seminars were incorporated as content. 
We learned that hackathons typically include talks by guest speakers, lectures, and workshops aligned with the event's theme and goals. For Innovation Days, we prepared a workshop on upstream processes which is not usually experienced, a communication workshop, and a workshop on the Toyota Way, given that it was a "Hackathon held by KINTO Technologies." Many people would think of novelty items when it comes to events hosted by IT companies. This time, we distributed stickers, hoodies and clear files to the participants and support members. We also borrowed ideas from various events, like setting up criteria and rules for judging final pitches and deliverables, allocating time for coding, icebreakers, and prizes. Note: After the event name was decided, the UIUX team in the Global Group designed the logo. Thanks to them, we ended up with fantastic novelty items. Appreciate it a lot!!!! Findings 3: Theme setting The last point we want to address is theme setting. Noting that many hackathons have narrowly focused themes and objectives set by organizers, and some even have sponsors for various themes, in our event, the managers decided a "Challenge Theme" and took on the role of "Challenge Owner" to sponsor and explain each theme to the participants. This approach allowed the manager to provide support and encouragement to the participants. Reference: Council Post: Four Tips For Running A Successful Hackathon Urban Mobility Hackathon Find & Organize Hackathons Worldwide - Mobile, Web & IoT Hackathon Guide Theme Review For the content of the themes, four managers (the Group Manager and three Assistant Managers) who will actually evaluate on the day of the event selected four themes. Theme 1-2 Theme 3-4 Encouraging members Since this was the first attempt within the company, it took about three months from the time from the start of planning to recruiting members, through research, content review, and theme selection. At the beginning of November, after finalizing the theme, we held a project briefing for all Global Development Group members, and began recruiting on November 8th. The official event name, "KINTO Global Innovation Days," was decided. There was a proposal to make participation mandatory for all participants, but we chose to respect autonomy and allowed volunteers to opt-in instead. Slack was used for recruiting. 🔻🔻 At the briefing, we received words of encouragement from our managers and told our participants that we had the support from our CEO and CIO. However, recruiting participants was initially challenging, so we focused on highlighting the benefits directly to the team members Flo was responsible for this. We decided to communicate the benefits when talking in person in the office and through DMs. This allows us to ask members who are unable to participate why and make improvements. First, we explained the experience and skills they would gain by participating in the event. We emphasized the opportunities to try programming languages they don't normally use, propose new tools, and suggest improvements that haven't been prioritized. We also appealed to a sense of ownership and investment, as proposals made during the event could be used to improve processes in Global Group (Theme 3), be commercialized as a new service (Theme 1, 2), or be considered for participation in other hackathon events. Among all, our top priority was creating a supportive environment. Although ideas are evaluated and rewarded, the competition is friendly. 
We also encouraged people who had never participated in such an event, felt they couldn't contribute because they weren't engineers, or thought they'd be of no use to participate because it's an event where they could experience things they normally would not. There are also things that we noticed in conversations. Since the event was held before Christmas, several people were planning to take consecutive holidays or return to their home countries. For this reason, we decided to move the event up a few days. We adjusted the schedule with the instructors of each workshop, and finally set the pre-event for December 14th to 19th, with Innovation Days on December 20th and 21st. This added at least two to three more team members who could participate. As a side note, since there were only three operating members plus one support member, it was convenient for us to have a weekend in the middle of the event. Hosting the event all week long would have been physically demanding. Grouping and Pre-work Thanks to the recruiting efforts, we gathered 30 participants. More than half of the Global Development Group participated, as the group manager, assistant managers, and we operational members were not eligible to participate. Participants came from various teams such as Business Development, PdM (Product Management), UIUX, Frontend, Backend, Testing, and DevOps. We allocated each team leader based on two conditions: 1) involving people who are not usually involved in the work, and 2) ensuring team leaders were separated to maintain a balance of power. We ended up with 5 people in each of the 6 teams. (The members were perfectly divided because we had a total of 30 participants😊) The team members were announced on November 18th, and were then given two weeks to review and submit the following information: Team name Theme of choice Team leader As we are the team that is the most used to interacting cross-functionally in the group, we had concerns about whether the participants would be able to communicate well with each other, or engage actively in the event. However, our worries were unnecessary. As they were participating voluntarily, each team was more proactive than expected, creating their own Slack channels and holding meetings, which gave us hope for future events! 🎉 Review of Preparation Since we started this project with completely no experience, either within the company or from previous jobs, we had to conduct extensive research and seek advice from various people during the preparation. In particular, the approval process took a long time, but involving the CIO and the president was one of our achievements, and a major factor that we believe will lead to future events. In addition, we were able to successfully distribute tasks by combining the strengths of each Operations Enhancement Team member, such as idea generation (including research), planning and reporting, and understanding the situation and inspiring team members, which enabled us to implement the project in a short period of about four months from conception. There were various challenges during the pre-event period and on the day of the event, which will be described in the next article. Conclusion By the way, the planning of KUDOS and this event emerged from our daily conversations within the Operations Enhancement Team. We place a high value on conversations and take pride in our ability to go from casual conversation— like suggesting solutions and sharing experiences —to planning, execution, and results.
Introduction Hello. I am Nakaguchi from KINTO Technologies, Mobile App Development Group. As a team leader of the iOS team, I have previously published articles on team building, which you may find interesting. Please feel free to check them out: Revitalizing Retrospectives Through Professional Facilitators 180-degree feedback: Highly recommended! Recently, I participated in [Probably the world’s fastest event: the "Agile Teams goal-setting Guidebook" ABD Reading Session] My three main objectives for attending this event were as follows: I wanted to experience Active Book Dialogue® (referred to as "ABD" from now on). I was interested in the book featured in the event, "Agile Teams goal-setting Guidebook" . I wanted to meet the author, Ikuo Odanaka. Among these, experiencing ABD for the first time was particularly valuable. I found this reading method incredibly insightful and would like to introduce ABD to more people through this article. Important Notice All individuals and materials mentioned in this article have been approved to be published by the event’s organizers and the respective individuals. About the event This event took place on Wednesday, July 10, 2024, and was held as an "ABD reading session with the author before the publication" of the "Agile Teams goal-setting Guidebook". The event was so popular that the 15 available slots were filled within the same day the registration page was open. I feel incredibly fortunate to have been able to participate. I’m especially grateful to Kin-chan from our Corporate IT Group, who introduced me to this event! About the book I won’t go into too much detail about the book’s content, as I encourage you to read it yourself. However, I’d like to share some insights Ikuo-san introduced during the opening. It seems that goal setting isn’t particularly favored in today’s society. However, if everyone sincerely engages with their goals and strives to achieve them, the world will become a better place. Therefore, creating good goals is extremely important. That said, while setting goals is crucial, finding ways to achieve them is even more important. This book dedicates roughly the first 20% to the process of goal setting, with the remainder focused on how to achieve those goals, incorporating elements of Agile methodology. Although the book doesn’t cover performance evaluations, which are often discussed alongside goal settings, it does include columns written by eight contributors. These columns nicely complement the content, so I highly recommend reading them! Ikuo-san's opening scene About Ikuo-san Although I have never met Ikuo-san before, I was familiar with him through the following LT sessions and articles: ”Keeper of the seven keys four keys and three more ” ”10 reasons Why it’s easy to work with an engineering manager like this!” ”To fulfill the pride of being a "manager." “5 essential books that supported the Ideal EM, Ikuo Odanaka” I found his insights on development productivity, engineering management, and his approach to reading, to be incredibly valuable. I’ve always wanted to meet him and have a conversation. Unfortunately, although I manage to exchange a brief greeting with him during the event, I didn’t have the chance to have a proper conversation. While this was disappointing, I hope there will be another opportunity in the future. About ABD The following is a quote from the official ABD website . What is ABD? 
Explanation by the developer, Sotaro Takenouchi: ABD is an entirely new reading method that allows both people who are not fond of reading and those who love books to read the books they want in a short period of time. Through the process of dividing the book, summarizing it, presenting and sharing the summaries, and engaging in dialogue, participants can deeply understand what the author is trying to convey, leading to active insights and learning. Additionally, by combining each participant's active reading experience through group reading and discussion, the learning deepens further, and there is potential for new relationships to be fostered. I sincerely hope that through ABD, everyone can take better steps in their reading, driven by their intrinsic motivation.

The process
Co-summarize: Participants bring their own books or divide one book into sections. Each person reads their assigned section and creates a summary.
Relay presentation: Each participant presents their summary in a relay format.
Dialogue: Participants pose questions and discuss their impressions and thoughts, deepening their understanding.

The appeal of ABD
1. Short reading time: ABD allows you to read a book in a short amount of time while gaining a deep understanding of the author's intentions and the content. It's perfect for those who tend to accumulate unread books.
2. Summaries remain: After an Active Book Dialogue® session, the summaries remain, making it easy to review them and to share the key points with others who haven't read the book.
3. High retention rate: Since participants summarize the material with a presentation in mind, and then immediately present and discuss it, the content sticks in memory more effectively.
4. Deep insights and emergence: Engaging in dialogue with diverse people, each bringing their own questions and impressions, leads to profound learning and the emergence of new ideas.
5. Multifaceted personal growth: ABD helps participants develop focus, summarization, presentation, communication, and dialogue skills, all of which are crucial for leadership in today's world.
6. Creation of a common language: When the same team members participate, they share the same level of knowledge, creating a common language.
7. Community building: With just one book, you can create a space for dialogue and connect with others, making it ideal for casual community building.
8. Most importantly, it's fun!: The immediate sharing of the excitement and learning gained from reading enriches the experience and, most importantly, makes it enjoyable.

Personally, I find the value of 1. Short reading time, 6. Creation of a common language, 7. Community building, and 8. Most importantly, it's fun! to be exceptionally high.

On the day
The book was divided into 15 sections. This was the first time I had seen such a sight! lol
The book was divided into sections
Co-summarize (20 minutes): Each participant read their part and created a summary. We were given 20 minutes to read and summarize our sections onto three A4 sheets, which was quite challenging. I was so pressed for time that I forgot to take any pictures.
Relay Presentation (1 minute 30 seconds per person x 15 people): Each participant posted their summaries on the wall.
The summaries everyone prepared
Then, each person presented their summary in 1 minute and 30 seconds. Everyone's summaries and presentations were outstanding. This is the photo of me presenting. I was so nervous, and the time was so short, that I can't remember what I said at all!
My presentation
Dialogue (25 minutes): In this part, we picked three sections from the presentations and split into groups to discuss them further. I joined the group focused on "Becoming a team that can help each other."
Group discussion
The group included Scrum Masters and Engineering Managers, and we exchanged a wide range of opinions. One particularly memorable discussion was about how we should build teams where people can challenge themselves with what they love, whether it is their forte (a specialty) or something they struggle with (a growth opportunity).

What I learned from the book through ABD
Until now, I had never used OKR (Objectives and Key Results) as a method for goal management, but my understanding of OKR deepened through this experience. I also learned how crucial it is for a team to set goals driven by intrinsic motivation. What stood out to me was the importance of setting goals through discussions within the team, rather than using a top-down approach. I was also struck by the idea that what truly matters is the "achievement of goals," not just the "completion of tasks." The notion that "sometimes, you need the courage to abandon lower-priority tasks" was a new perspective for me. Moreover, the breakdown of reasons why we might feel we don't have enough time to achieve our goals, such as genuinely not having enough time, being unsure whether the time investment is worthwhile, or lacking motivation, was something I had never considered before. While "genuinely not having enough time" is easy to grasp, "not being sure if it's worth the time" and "lacking motivation" were new to me, though they resonated with my own experience. The book also offers solutions to these challenges, so I would like to read it in full and revisit them.

Thoughts
It was my first time experiencing ABD, and I found it both stimulating and very enjoyable. Since all the participants on the day were genuinely interested in the book we discussed, the presentations and dialogues were highly constructive, and I learned a lot. I'm considering trying ABD at our company as well, by gathering team members who are interested. However, I also felt that the operational difficulty could be quite high, for the following reasons: facilitators need strong skills because the session must proceed within a limited time; co-summarizing is challenging, which might lead to differences in the quality of summaries and presentations depending on the participants; and selecting the right book and gathering team members could be difficult. I've participated in book study groups several times before, and I found that they often pose challenges such as the burden of continuing over a long period and the individual workload (depending on the format of the study group). In contrast, ABD offers a great alternative by wrapping up in a short session, which helps overcome those drawbacks. The trade-off, however, might be a shallower understanding of the book due to the shorter time. I think it's important to carefully select the book and discuss with participants in advance to determine the most suitable reading method.
Introduction
Hello, this is ahomu, who joined the company in June. For this article, we asked everyone who joined in June and July 2024 to share, in writing, their impressions after joining. I hope it becomes useful content both for everyone interested in KINTO Technologies and for the participants themselves when they look back on it someday!

hosoya
![Photo of a houseplant](/assets/blog/authors/ahomu/20241007/hosoya.jpg =300x)
Self-introduction: I'm hosoya. I belong to the IT/IS Department, where I handle the help desk for our internal corporate IT.
How is your team structured? The team has five people including me. Besides my team there are several others, divided by role, and we coordinate with them depending on the inquiry.
First impressions of KTC? Any gaps with your expectations? I was impressed that the corporate IT function is split into teams by role and that they coordinate so well. Having only ever worked in one- or two-person IT departments, it struck me as very well organized.
What is the atmosphere like on the ground? It's quiet and easy to concentrate on your own work. That said, it's not hard to strike up a conversation; whether it's about work or just small talk, things liven up right away, so the mood is cheerful.
How did you feel about writing for the blog? Unless you work together directly, you rarely get to learn what other people do day to day, so I hope this blog provides that kind of opportunity.
Question from another member: What does your daily work schedule look like? Answer: I arrive at 9:00 and handle help desk inquiries until I leave at 18:00. We hold team information-sharing meetings in the morning and evening. It depends on the inquiries, but my days follow a fairly fixed routine.

my
![Photo of a blue sea and sky with white clouds](/assets/blog/authors/ahomu/20241007/my.jpg =300x)
Self-introduction: I'm my, from the Data Analysis Department, currently working as a data scientist. I have been involved in a wide range of data-related work as a data scientist and machine learning engineer.
How is your team structured? Four members including the manager.
First impressions of KTC? Any gaps with your expectations? As a positive surprise, the onboarding is well organized, the internal documentation is solid, and communication on Slack is very active.
What is the atmosphere like on the ground? A calm environment where technical discussions are easy to have.
How did you feel about writing for the blog? I'm happy to have an opportunity to share information.
Question from another member: Tell us something you bought for working from home that turned out to be great! Answer: A Herman Miller chair. I can sit comfortably for long hours and am very satisfied with it.

yi
![Photo of two cacti growing from a pot](/assets/blog/authors/ahomu/20241007/yi.jpg =300x)
Self-introduction: I'm yi, from the QA Group in the Platform Development Department, working in QA.
How is your team structured? The team has ten people, currently split into three groups (frontend, back office, and apps), each handling its own projects.
First impressions of KTC? Any gaps with your expectations? My impression was that, although it is a young company, the internal systems are solid. Before joining I had imagined things might be a bit more chaotic, but it was calmer than I expected.
What is the atmosphere like on the ground? Even when everyone is busy, people on the team and projects answer your questions, and the overall atmosphere is calm, so it's an easy environment to settle into.
How did you feel about writing for the blog? Honestly, I've never written this kind of blog before, so I was unsure what to write.
Question from another member: How is the team atmosphere? Any good points you've noticed recently? Answer: As I wrote above, the atmosphere is calm overall. As QA at KTC, each of us runs testing for our assigned projects together with our partners. Many people handle multiple projects and are busy, but I like that everyone, not just newcomers, feels free to ask each other questions.

ahomu
![Illustration of a seabird holding an axe](/assets/blog/authors/ahomu/ahomu.png =300x)
Self-introduction: I'm ahomu, in the IT/IS Department. My career has mostly been in web frontend development, but I currently work on various cross-organizational matters.
How is your team structured? Actually, I joined on the understanding that we would work out the details after I started, and as of writing I'm operating solo, attached directly to the department (a sort of in-house freelancer) (。•̀ᴗ-)✧
First impressions of KTC? Any gaps with your expectations? During the casual interviews and the selection process, my current department head and the vice president spoke frankly about the state of the business and the atmosphere of the organization, so I haven't felt anything like a gap. If anything, being under a large corporate group means internal controls are tighter, in a good way, than at the large tech companies and startups I had experienced before, which feels fresh.
What is the atmosphere like on the ground? Even though I work solo, I get chances to talk with managers and members across many departments. I can feel the responsibility each of them carries for the business, and I'm grateful that they kindly make time for a newcomer who suddenly approaches them.
How did you feel about writing for the blog? Ah, this genuinely surprised me: contributions to the tech blog are very active internally, and people publish consistently without anyone chasing them, which makes me feel there is plenty of room to grow.
Question from another member: Are there any cultural or atmospheric differences between the Nagoya and Tokyo offices? Answer: Nagoya is compact, with around 20 people, many of whom play broad, and perhaps rather distinctive, roles. It also feels relatively close to the KINTO business itself, and many people there deal with the parent company. Recently, informal drinking parties have started being held at the Nagoya office 🍻

つづら
![Evening photo of a river and the townscape on both banks in a city abroad](/assets/blog/authors/ahomu/20241007/tsuzura.jpg =300x)
Self-introduction: I'm a designer in the Editorial Group of the Marketing Planning Department!
How is your team structured? Nine directors and four designers.
First impressions of KTC? Any gaps with your expectations? Since departments and teams are finely divided, my first impression was that there might not be much interaction between employees. In practice, though, I go to lunch and private get-togethers with designers from other departments, and being able to share information like that has been a big help.
What is the atmosphere like on the ground? In my team, everyone works on their own projects, so I'm not deeply involved with all of them, but we chat when we run into each other in the office while still getting the work done, so there's a good balance.
How did you feel about writing for the blog? Nervous and excited.
Question from another member: Tell us a good lunch spot near your office! Answer: I'm based at the Muromachi office, and I recommend でですけ サイゴンキッチン! I always order the half-and-half of pho and curry; each comes in about four flavor variations and every one of them is delicious.

上原 直希
![Photo of a cat in profile with its eyes closed](/assets/blog/authors/ahomu/20241007/uehara.png =300x)
Self-introduction: My name is Uehara. I belong to the KINTO FACTORY Development Group in the Project Promotion Department and work as a backend engineer. In my previous job I developed news media at a long-established ISP. My favorite programming language is Rust and my favorite editor is NeoVim.
How is your team structured?
On the backend side, six of us are developing; including frontend engineers, the team comes to about 20 people.
First impressions of KTC? Any gaps with your expectations? I half expected to be thrown straight into the field without much onboarding, but the onboarding and 1-on-1s turned out to be surprisingly substantial, which let me get into the work smoothly. There is an appetite for trying new things everywhere in the company, and that is a great stimulus for me as well.
What is the atmosphere like on the ground? Friendly and relaxed, I'd say. I'm the type who can't leave a question unasked, and team members answer without the slightest annoyance, which I really appreciate. I have more time to focus on development than before, and being able to face the product as an engineer makes this a good environment.
How did you feel about writing for the blog? Actually, before I joined, a certain article on the KINTO Technologies Tech Blog helped me out, so it's a real honor to now be on the writing side. I consciously try to produce visible output through Slack, blogs, and so on, and I hope to keep publishing useful information on the Tech Blog.
Question from another member: Tell us the best trip you've ever taken, and why! Answer: Ise-Shima, where I went on my honeymoon! The まわりゃんせ pass sold by Meitetsu is so convenient it's unbeatable. It's hard to get hold of if you live in Tokyo, but I recommend buying the version without limited-express tickets on Jalan.

梁 晉榮
![Photo of curry, french fries, and a can of Sui gin soda](/assets/blog/authors/ahomu/20241007/jin.jpg =300x)
Self-introduction: I'm 梁晉榮, from Taiwan. I belong to the Mobile App Development Group and mainly develop Android apps.
How is your team structured? The development team for my product has six Android engineers including me.
First impressions of KTC? Any gaps with your expectations? My team is full of energy, there are many Android engineers, and the broad technical exchange through study sessions and the like has been a really good stimulus for me.
What is the atmosphere like on the ground? It often gets busy depending on the development phase, and I feel it's a team that moves with real speed. Even so, everyone wants to build a good product, so we don't cut corners on fine-grained communication.
How did you feel about writing for the blog? This is my first time writing a joining-the-company entry. It let me look back on how I felt right after joining and think about how I want to work at KTC going forward.
Question from another member: Any smartphone apps you've been interested in lately? Answer: The PayPay app. I've used it for years since the service launched, and as new features keep being added, I'm very curious how they maintain app quality while developing it, and about how it works as a super app.

Dara Lim
![Photo of a car on display indoors](/assets/blog/authors/ahomu/20241007/daralim.jpg =300x)
Toyota FJ25 Land Cruiser - Toyota Dealership in Bogota, Colombia
Self-introduction: My name is Dara Lim. I belong to the KINTO Global Development Group in the Business Development Department. My title is Business Development Manager, but the work I do is closer to that of a business analyst. In my previous job, I worked as a financial analyst and business analyst in the insurance industry.
How is your team structured? There are 3 members on my team, and we work closely with the engineering team to develop software solutions for the global full-service lease businesses.
First impressions of KTC? Any gaps with your expectations? I really appreciate the orientation/onboarding process and the 1-on-1 meetings. They helped me transition smoothly into work. My team was also very supportive.
What is the atmosphere like on the ground? I really enjoy the Jimbocho office space and its surroundings. My team sits close together, so we can have discussions readily.
How did you feel about writing for the blog? Actually, before I joined the company, I was helped by many articles on KINTO Technologies' Tech Blog, so I'm glad to write about my initial experience of joining the company.
Question from another member: What is the best thing you have noticed since joining KTC? Answer: I have had the experience of traveling to Latin America to visit KINTO businesses in Peru, Brazil, and Colombia. These were very valuable experiences for understanding the car leasing business and its profitability and, best of all, for meeting other fellow KINTO members. I think this is the best thing I've experienced since joining KTC.

谷 郁弥
![Illustration of a plump cat](/assets/blog/authors/ahomu/20241007/tani.jpg =300x)
Self-introduction: I'm Tani, a frontend engineer in the New Car Subscription Development Group of the KINTO ONE Development Department, based at the Osaka Tech Lab. I have handled a wide range of frontend development, from production work to service development.
How is your team structured? We're a four-person team, developing a set of tools for dealerships and internal users with a small headcount.
First impressions of KTC? Any gaps with your expectations? Before joining, I had assumed it would be a chaotic environment mixing big-company and startup cultures, with the working environment still rough around the edges. What I found instead was one pleasant surprise after another: dense onboarding, a workload that's easy to adjust, full flextime, overtime properly reflected in pay, generous benefits, and lots of kind, helpful people.
What is the atmosphere like on the ground? It's an environment with high psychological safety where you can actively ask about anything you don't understand. Participation in study sessions is encouraged, which is also appealing, and in my team we have a lot of freedom in technology choices, with re-architecting and refactoring encouraged, so overall I feel it's an easy place to level up your skills.
How did you feel about writing for the blog?
I wanted to convey KINTO Technologies in as sharp a resolution as possible, so I decided to hammer the keyboard as hard as I could.
Question from another member: What is a favorite item you own, and why? Answer: My SONY noise-cancelling headphones (WH-1000XM5)! Thanks to them, even though I'm sensitive to sound, I can get into the zone right away, so they are indispensable.

In closing
Thank you all for sharing your impressions after joining! New members are joining KINTO Technologies every day! More joining entries from people in all kinds of departments are on the way, so stay tuned. KINTO Technologies is looking for people to work with us across a variety of departments and roles! For details, please see our recruitment information: https://www.kinto-technologies.com/recruit/
Introduction
Hello, I'm Ueyama, who joined the company in April. This article compiles the impressions of everyone who joined in April 2024. I hope it becomes useful content both for everyone interested in KINTO Technologies and for the participants themselves when they look back on it someday 🌸

マツノ
![Golf](/assets/blog/authors/K.ueyama/Newcomers/golf.jpg =250x)
Self-introduction: Nice to meet you all! I'm Matsuno, and I joined in April 2024! I belong to the MSP team in the Platform Group of the Platform Development Department. In my previous job I handled the maintenance and operation of systems built on AWS.
How is your team structured? The MSP team I belong to has four members. We mainly handle routine operations taken over from other teams.
First impressions of KTC? Any gaps with your expectations? My impression was that there are a lot of clearly capable people, including the others who joined at the same time. Also, how friendly and informal most people are was a gap, in a good way.
What is the atmosphere like on the ground? It's basically easy to ask questions or for advice at any time. There's a nice rhythm: when working, people quietly concentrate, and when chatting, it's lively and relaxed.
How did you feel about writing for the blog? I already knew about the tech blog and was interested, so this felt like the perfect opportunity!

m
![Sea](/assets/blog/authors/K.ueyama/Newcomers/sea.jpg =250x)
Self-introduction: I'm m, from the Creative Office. In my previous job I was a UI/UX designer at an SES-type IT company.
How is your team structured? Ten people, directors and designers.
First impressions of KTC? Any gaps with your expectations? The office is very clean and there is a free drink server, so it's comfortable.
What is the atmosphere like on the ground? Most people are in their 30s and 40s, all with plenty of knowledge and experience. The office is fairly lively much of the time.
How did you feel about writing for the blog? I think it's great to have a place to share your own thoughts and knowledge!

ラセル
![Castle](/assets/blog/authors/K.ueyama/Newcomers/castle.png =250x)
Self-introduction: I'm Rasel, from Bangladesh, and I joined in April 2024. I'm in charge of iOS on the Prism team of the Mobile App Development Group in the Platform Development Department.
How is your team structured? The team has about 14 people, including engineers, designers, and the PO.
First impressions of KTC? Any gaps with your expectations? I'm interested in mobility services, and I was deeply impressed by KTC's mission of leading Toyota's mobility services. I haven't felt any particular gap.
What is the atmosphere like on the ground? People are kind and helpful. There are no barriers to using the latest technology, and it's easy to talk about technical issues.
How did you feel about writing for the blog? This is my first time writing a blog in this context, but I think it's a really cool and fun idea.

ウエヤマ
![Pasta](/assets/blog/authors/K.ueyama/Newcomers/pasta.jpg =250x)
Self-introduction: I'm Ueyama from the Business Systems Group. In my previous job I developed systems at an SIer.
How is your team structured? Seven engineers.
First impressions of KTC? Any gaps with your expectations? I had already spoken with members of my team during the interviews, so I haven't felt much of a gap.
What is the atmosphere like on the ground? Everyone is genuinely kind and easy to talk to.
How did you feel about writing for the blog? I was surprised that self-introduction articles are managed on GitHub and submitted as pull requests.

R
![Cat and fish](/assets/blog/authors/K.ueyama/Newcomers/catfish.jpg =250x)
Self-introduction: I'm R, from the membership platform team in the Common Services Development Group of the Platform Development Department. My work is roughly 60% frontend and 40% backend.
How is your team structured? One PdM and four engineers.
First impressions of KTC? Any gaps with your expectations? Watching up close so many talented people juggling multiple projects and events inside and outside the company gave me the impression of a very free culture. I had also attended a study session hosted by KTC before joining, so I already knew part of the atmosphere (mainly among the younger members) and didn't feel any particular gap.
What is the atmosphere like on the ground? The backend side works quietly, while on the frontend side we sometimes get lively exchanging opinions and impressions about the screens under development.
How did you feel about writing for the blog? Reading posts never required any mental preparation, but when it came to writing one myself, I got stuck on what to convey. I clearly need to train my ability to verbalize and share information.

kasai
![Chick illustration](/assets/blog/authors/K.ueyama/Newcomers/chickicon.png =250x)
Self-introduction: I'm kasai, from the SRE team in the Platform Group of the Platform Development Department. I did SRE in my previous job as well.
How is your team structured? The group as a whole is large, but the SRE team is just two people! A blog post about the team will be published later, so look forward to it!
First impressions of KTC? Any gaps with your expectations? We talked things through thoroughly during the interviews and aligned expectations, so I didn't feel any gap!
What is the atmosphere like on the ground? Friendly and lively!!!!!
How did you feel about writing for the blog? At last... the time... has come!!!!!! https://blog.kinto-technologies.com/posts/2022-12-03-ktc_club_introduction/

In closing
Thank you all for sharing your impressions after joining! New members are joining KINTO Technologies every day! More joining entries from people in all kinds of departments are on the way, so stay tuned 🍻 KINTO Technologies is looking for people to work with us across a variety of departments and roles! For details, please see our recruitment information: https://www.kinto-technologies.com/recruit/
Introduction
Hello, I'm Hiroya (@___TRAsh) from Mobile Development. We have a number of in-house products, and many of them use Xcode Cloud. Xcode Cloud is Apple's official CI/CD service, which automates building iOS apps and CD (deploying to TestFlight). There was little reference material on how to pull a private repository into Xcode Cloud as a library, and it took some effort to get the build to pass, so I've summarized what I found here.

Target readers
Since this is about CI/CD for iOS, it assumes some familiarity with iOS development.

Environment
- Xcode 15.4
- Libraries are managed with SwiftPM
- Some of the referenced libraries live in private repositories
- Deploying to TestFlight with GitHub Actions + Fastlane

What we want to do
We want to migrate TestFlight deployment from GitHub Actions + Fastlane to Xcode Cloud. This removes the dependency on Fastlane and reduces the number of tools needed in the flow up to app submission. Certificate management also gets easier, because Xcode Cloud references the Apple Developer certificates directly.

The problem
Xcode Cloud sounds like nothing but upsides, but referencing a private repository as a library requires user authentication, and Xcode Cloud has no built-in setting for that kind of authentication, so it takes a little extra work. By using the ci_scripts/ci_post_clone.sh hook that Xcode Cloud provides to configure authentication, we can make private repositories resolvable.

Setting up .netrc
Since version 12.5, Xcode has been able to read .netrc. A .netrc file records usernames and passwords; placing it at ~/.netrc makes the credentials be supplied automatically during git clone. In our case the library is distributed via GitHub Releases on a private repository, so api.github.com is added as well.

touch ~/.netrc
echo "machine github.com login $GITHUB_USER password $GITHUB_ACCESS_TOKEN" >> ~/.netrc
echo "machine api.github.com login $GITHUB_USER password $GITHUB_ACCESS_TOKEN" >> ~/.netrc

The username and access token are stored as secret environment variables in Xcode Cloud and referenced from ci_post_clone.sh.

Adding the URL to Additional Repositories
Add the library's repository URL to "Additional Repositories" in the Xcode Cloud settings in App Store Connect.

Removing settings with defaults delete
Even once the private library could be fetched with the settings above, dependency resolution still failed with an error like the following:

:::message alert
Could not resolve package dependencies: a resolved file is required when automatic dependency resolution is disabled and should be placed at XX/XX/Package.resolved. Running resolver because the following dependencies were added: 'XXXX' ( https://github.com/~~/~~.git ) fatalError
:::

The cause of this error is that, on Xcode Cloud, SwiftPM tries to resolve package versions automatically without consulting Package.resolved. Deleting the following Xcode defaults makes the build pass:

defaults delete com.apple.dt.Xcode IDEPackageOnlyUseVersionsFromResolvedFile
defaults delete com.apple.dt.Xcode IDEDisableAutomaticPackageResolution

To be honest, I couldn't work out the exact difference between these two settings... Running xcodebuild -help locally shows similar options, but they don't help either:

$ xcodebuild -help
...
-disableAutomaticPackageResolution prevents packages from automatically being resolved to versions other than those recorded in the `Package.resolved` file
-onlyUsePackageVersionsFromResolvedFile prevents packages from automatically being resolved to versions other than those recorded in the `Package.resolved` file

Both descriptions say exactly the same thing, namely that they prevent packages from being resolved to versions other than those recorded in Package.resolved...
There is a question about the same issue on the SwiftPM repository, and it was resolved with this workaround, so I believe this approach is fine for now. https://github.com/swiftlang/swift-package-manager/issues/6914
In short, deleting these two settings makes SwiftPM resolve the library dependencies using only Package.resolved.

Conclusion
By setting up .netrc in the ci_scripts/ci_post_clone.sh script that Xcode Cloud runs right after cloning, the private repository becomes accessible, and by running defaults delete to fix dependency resolution, the build now passes on Xcode Cloud.

#!/bin/sh
defaults delete com.apple.dt.Xcode IDEPackageOnlyUseVersionsFromResolvedFile
defaults delete com.apple.dt.Xcode IDEDisableAutomaticPackageResolution
touch ~/.netrc
echo "machine github.com login $GITHUB_USER password $GITHUB_ACCESS_TOKEN" >> ~/.netrc
echo "machine api.github.com login $GITHUB_USER password $GITHUB_ACCESS_TOKEN" >> ~/.netrc

Final thoughts
Fastlane is a great tool with a long history, but moving to Xcode Cloud let us simplify the flow up to app submission. As mentioned above, Xcode Cloud has plenty of advantages, so do consider adopting it.

Appendix
https://developer.apple.com/documentation/xcode/writing-custom-build-scripts
https://speakerdeck.com/ryunen344/swiftpm-with-kmmwoprivatenagithub-releasedeyun-yong-suru
https://qiita.com/tichise/items/87ff3f7c02d33d8c7370
https://github.com/swiftlang/swift-package-manager/issues/6914
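Supplementary note: for reference, here is a slightly more defensive variant of the ci_post_clone.sh shown in the conclusion above. It is only a sketch under the same assumptions as the article (GITHUB_USER and GITHUB_ACCESS_TOKEN registered as secret environment variables in the Xcode Cloud workflow); the fail-fast guards, the "|| true" on defaults delete, and the chmod are additions of mine, not part of the original script.

```sh
#!/bin/sh
# ci_scripts/ci_post_clone.sh — defensive sketch of the script from the conclusion above.
set -eu

# Fail fast with a clear message if the Xcode Cloud secrets are missing,
# so a misconfigured workflow does not surface as an opaque resolution error.
: "${GITHUB_USER:?GITHUB_USER is not set in the Xcode Cloud environment}"
: "${GITHUB_ACCESS_TOKEN:?GITHUB_ACCESS_TOKEN is not set in the Xcode Cloud environment}"

# Delete the two Xcode defaults discussed above so that dependency resolution
# follows Package.resolved. "|| true" keeps the script going if a key does not
# exist on the build machine.
defaults delete com.apple.dt.Xcode IDEPackageOnlyUseVersionsFromResolvedFile || true
defaults delete com.apple.dt.Xcode IDEDisableAutomaticPackageResolution || true

# Write the credentials used for cloning the private repository and for
# downloading release assets via the GitHub API, then restrict permissions.
cat > ~/.netrc <<EOF
machine github.com login ${GITHUB_USER} password ${GITHUB_ACCESS_TOKEN}
machine api.github.com login ${GITHUB_USER} password ${GITHUB_ACCESS_TOKEN}
EOF
chmod 600 ~/.netrc
```

Writing the whole file with a heredoc instead of appending with echo also keeps the script idempotent if it ever runs more than once on the same machine.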
Introduction
Hello! I'm Ren.M from the Project Promotion Group at KINTO Technologies. I usually work on frontend development for KINTO ONE (Used Cars). This time, rather than a technical topic, I'd like to introduce one of our internal activities!

Who this article is for
People interested in company club activities, and people who feel there isn't enough communication among employees.

What are internal club activities?
Our company has a club culture, and there are quite a few clubs (e.g., a futsal club and a golf club). Each club has a public Slack channel, joining is entirely up to the individual, and anyone can join casually! Some people even belong to several clubs! In the basketball club I belong to, we rent a gymnasium near the office and hold practice sessions for about three hours starting in the evening. Gym slots are allocated by lottery, but we have basically managed to play every month! To keep things running smoothly, volunteers split the roles: someone books the gym every month, someone goes to pay the usage fee, and someone manages the club budget. Once a booking is confirmed, we announce it on Slack and recruit participants! It varies by day, but we usually get around ten people!
Scenes from our activities

What I've gained from club activities
A way to refresh: Most of our employees are engineers doing desk work, and since we sometimes work from home, it's easy to end up short on exercise. Exercising through club activities refreshes both mind and body! That said, things inevitably get heated, so we are careful to avoid injuries!
Interacting with people from other departments: Personally, I think this is the biggest strength of club activities. Members come from a variety of departments, so you get to communicate with people you don't normally work with. Compared with meeting someone for the first time in a meeting, having already gotten to know them through club activities may make the subsequent work go more smoothly. I also hope it helps new employees settle into the company.

In closing
What did you think? I believe internal club activities are a great culture that deepens relationships between employees while providing a way to refresh! If you join our company, please do try interacting with a wide range of colleagues through club activities! The tech blog also has many other articles, so please take a look if you're interested!
Greetings
Hello, everyone. I'm Nakaguchi from the Mobile App Development Group. How was iOSDC Japan 2024 for you? With the event held in August this year, the festival mood felt even hotter than usual!! I'd be happy if this article reaches people who attended iOSDC, iOS engineers, and anyone who loves conferences. Until last year, our involvement with iOSDC was modest: whoever wanted to attend did so on their own, and attendees would at most share what they learned in LT format at an internal study session afterwards or write a tech blog post. But the KINTO Technologies of 2024 is a different story!! This year we went all in: we became a sponsor, several of us wrote proposals (and one was even accepted, amazing 🎉!!), and we hosted an iOSDC retrospective event!! As the final wrap-up, I'm writing this blog post!!

About sponsoring
KINTO Technologies sponsored iOSDC for the first time this year 🙌!!! The tech blog team that used to energize company-wide internal events has been reborn as the Technology PR Group, and we are now putting even more effort into external events!! Besides iOSDC, we are also sponsoring DroidKaigi 2024 and Developers Summit KANSAI, so we are showing up at more and more large conferences! For iOSDC, the iOS engineers of the Mobile App Development Group took the lead, with support from the Technology PR Group, the Creative Office, and many others, and the whole company got behind the sponsorship. Our members have written about it in separate articles and presented at the retrospective event described below, so please have a look!!
[Tech Blog] My first iOSDC sponsorship diary: this one focuses on the novelties and other deliverables!!! Please give it a read! https://blog.kinto-technologies.com/posts/2024-08-21-iOSDC2024-novelties/
[Tech Blog] KINTO Technologies is a Gold Sponsor of iOSDC Japan 2024, and here is our challenge token 🚙: this one includes interviews with our members, so please give it a read too! https://blog.kinto-technologies.com/posts/sponsored-iosdc-japan-2024/
[Presentation slides] What we did before exhibiting at iOSDC for the first time: a chronological walkthrough of how we prepared the sponsor booth. If you're interested in sponsoring a conference, there's a lot of useful material here!!! https://speakerdeck.com/ktchiroyah/iosdcchu-chu-zhan-matenisitashi-wogong-you-sitai

About the proposals
This year, for the first time, we held a proposal-writing session as a company 🙌!!! Members interested in speaking got together and, referring to slides like these, worked out together how to write and what to write, and submitted the following proposals!!
https://fortee.jp/iosdc-japan-2024/proposal/7fd624c8-06ec-4dc4-960a-da37f74cf90f
https://fortee.jp/iosdc-japan-2024/proposal/a82414cd-54d7-4abb-aa20-e35feb717489
https://fortee.jp/iosdc-japan-2024/proposal/e9e13b6d-0b74-4437-8ec0-ba6598b70ad7
https://fortee.jp/iosdc-japan-2024/proposal/ab0eeedf-0d4f-47a6-8df8-bd792b4d70ca
And the following were accepted!! Truly amazing 🎉!!
https://fortee.jp/iosdc-japan-2024/proposal/25af110e-61d0-4dc8-aba5-3e2e7d192868
https://fortee.jp/iosdc-japan-2024/proposal/c3901357-0782-4fb5-89b8-cb48c473f066
Hearing afterwards about other companies, with their proposal review sessions and far larger numbers of submissions, made me feel we can't afford to fall behind. We want to push harder next year!

We hosted an iOSDC retrospective event
After-events go hand in hand with big conferences like this, and last year several companies hosted iOSDC retrospectives. And this year, we hosted one too 🙌!!! Why we hosted it, how it came about, and how the day went are all covered with plenty of enthusiasm in this blog post, so please give it a read as well!!! https://blog.kinto-technologies.com/posts/2024-09-12-after-iosdc/

From here on, here is a roundup of which sessions the members who attended iOSDC watched.

KINTO Technologies' session viewing ranking
Fifteen people attended (including four partner members), so we tallied which sessions everyone watched and turned the results into a ranking!! It should give you a good sense of which technologies we are interested in right now!!
Tied for 2nd (6 viewers): Learning Swift 6's Typed throws and the overall picture of error handling in Swift
https://fortee.jp/iosdc-japan-2024/proposal/c48577a8-33f1-4169-96a0-9866adc8db8e
The talk explained Typed throws by contrasting it with the untyped throws it builds on, which made it very easy to follow. At first glance Typed throws looks like a clear win, but the speaker also touched on the official guidance that it should not be used casually, which I appreciated. Hearing the speaker koher-san's own views was instructive as well.

Tied for 2nd (6 viewers): Roundtable "The new era opened by Strict Concurrency and Swift 6: how shall we live?"
https://fortee.jp/iosdc-japan-2024/proposal/5e7b95a8-9a2e-47d5-87a7-545c46c38b25
We are in the middle of investigating Strict Concurrency ahead of Swift 6 ourselves, so this session was extremely useful, and we plan to use what was presented here as a reference for our migration. The roundtable format was refreshing, too, with the panelists covering for one another nicely; I'd love to see more talks in this style.

Tied for 2nd (6 viewers): Shared Swift Packages in practice to accelerate development
https://fortee.jp/iosdc-japan-2024/proposal/52d755e6-2ba3-4474-82eb-46d845b6772c
Since we also develop multiple apps, a shared Swift Package is very appealing; on the other hand, our apps differ enough in nature that there may not be much we can actually share, which is a bit of a dilemma. Even so, the steps toward a shared Swift Package (team structure, how to run it, and so on) were very instructive.

Tied for 1st (7 viewers): Rookies LT session
https://fortee.jp/iosdc-japan-2024/proposal/95d397a6-f81d-4809-a062-048a447279b3
One of our own members was on stage, so we went to cheer them on!! Waving penlights in support is great fun!! The content was fascinating too, and some members said they want to take on the challenge themselves next year!

Tied for 1st (7 viewers): The magic of App Clips: a new era of iOS design and development
https://fortee.jp/iosdc-japan-2024/proposal/66f33ab0-0d73-479a-855b-058e41e1379b
None of our apps have adopted App Clips yet, so many members said they would like to try them. At the same time, questions such as how best to distribute App Clip codes seem likely to come up.

Other sessions with many viewers are listed below.
Watched by 4 people: A thorough guide to the many kinds of "ViewController" in iOS/iPadOS, with implementation examples / Unpacking what makes an app feel like an iOS app / LT session, second half / Cross-platform adoption keeps growing: will we stop building iOS apps in Swift...? / An introduction to software development for confronting complexity
Watched by 5 people: Understanding the data formats behind putting My Number Card on iPhone / Opening up the future of ride sharing with GraphQL and schema-first development
Tallying everything up, the average number of sessions watched per person this time was 11.25!!!

Bonus
Since we ran a sponsor booth ourselves this year, I was curious which booths left an impression on everyone, so I took a survey!! Nine people responded, and the results are below. (Only booths that received at least one vote are shown.)
Tally of the most memorable booths
As you can see, the votes were spread quite widely. (I suspect our own booth collected 6 votes because everyone was being considerate!) It makes me realize how hard it is to build a booth that appeals to everyone. Against that backdrop, DeNA, which collected 4 votes, is impressive.

In closing
As I said at the beginning, this year the whole company put real energy into iOSDC! Sponsorship, proposals, the retrospective event: personally, I'm very satisfied with every one of them. That said, there is still plenty we can improve, so I hope we come back to iOSDC even stronger next year!! As always, many of the sessions were highly educational, which made attending well worth it once again.