The KINTO Technologies Tech Blog


Introduction

Hey there! I’m Viacheslav Vorona, an iOS engineer. In August this year, I had the opportunity to participate in iOSDC Japan 2024, a conference for iOS developers held in Tokyo. It was my first time attending a conference where the sessions were conducted (almost) entirely in Japanese, so I was a bit nervous. However, it turned out to be easier than I expected. Thankfully, when looking at code snippets or listening to topics I’m at least partially familiar with, even if I didn’t catch every single word, I found it wasn’t too hard to follow the presenters. Special thanks to those who included English translations on their slides! That was as helpful as it gets ❤️

I attended quite a few sessions. There were a couple of talks about curious hobby projects their authors are passionate about, like the session by ta.inoue that explained the principles behind GPS and demonstrated sniffing the transmitted signals with an iOS device. It has always fascinated me how GPS is so simple in principle yet tremendously complex in how it is actually built, so I decided to check this session out and wasn't disappointed. There was also an impressive deep dive into the history and purpose of various UIKit view controllers by haseken. Even though we encounter view controllers all the time in our job, UIKit is so extensive, and some view controllers are so niche, that you may not have even heard of some of them (there certainly were quite a few unfamiliar ones for me), so that session was really interesting as well.

Today, however, I'd like to talk about a couple of things that I thought were worth keeping in the back of my mind, as they might be useful in practice someday. Let’s take a look, shall we?

Practical Use of Hidden APIs

Every developer, every now and then, finds themselves in a situation where they need to take a closer look at some third-party code they are using in their projects.
We do this to figure out whether a library or framework satisfies the needs of a new project, to better understand the tools we are already using, and sometimes even to track down and report bugs in third-party code. There is, however, a special category of frameworks that we all use daily yet which remain some of the most obscure: the iOS frameworks, such as SwiftUI and UIKit. Alongside the well-documented and recommended APIs, these frameworks hide a lot of capabilities we aren’t aware of. Finding and using those capabilities can be not only an interesting exercise but also a real benefit to iOS developers in niche situations. This was the focus of the session titled iOSの隠されたAPIを解明し、開発効率を向上させる方法 (How to unlock hidden iOS APIs and improve development efficiency) by noppe.

noppe split his presentation into three parts:

- Perform: A short introduction on how one could expose hidden methods both in Objective-C (by replacing the .h files of the classes of interest) and in Swift (by fiddling with .tbd or .swiftinterface files).
- Use Case: Probably the longest part, in which noppe listed scenarios where hidden APIs could potentially be used:
  - Prototyping: useful and safe, as you can cut corners in non-critical code without impacting functionality, even if those APIs change.
  - Testing: useful and relatively safe, as you can more easily cover some testing scenarios, though it is better to cover the undocumented APIs with tests as well, to quickly detect any changes.
  - Production: obviously unsafe, not only because the APIs are not guaranteed to behave the same way in the future, but also because your app might get rejected from the App Store.
- Find: The last part was dedicated to finding hidden APIs, primarily by checking .h, .tbd, and .swiftinterface files, analyzing stack traces, and engaging with the community to see what others have found.

In his presentation, noppe also provided several examples of situations where hidden APIs might be useful.
Let’s take a look at them. For the prototyping stage, one memorable example involved UITextView. As you may know, although a placeholder in a text view is a common feature, Apple doesn't provide a public API for setting one. This usually means building custom solutions, such as adding label subviews. It turns out, however, that UITextView has an unexposed method, setAttributedPlaceholder, that does exactly that. Even though it’s not allowed in production, you can still use it during prototyping or proof-of-concept stages to save time.

An example related to testing involved UIDebuggingInformationOverlay, a tool that was easily accessible prior to iOS 11 but now requires some fiddling to access. It is still possible to enable it by utilizing hidden, low-level features of UIKit.

Learning about undocumented APIs can offer developers a deeper understanding of the tools they use daily. While using these APIs in production is not recommended, knowing they exist expands the possibilities in situations where unconventional solutions are needed. Additionally, understanding how they work is a great way to improve your own API design skills. Overall, it was a really practical and inspiring session, and I’m glad I attended. @ speakerdeck

Server-driven UI

As developers, we usually strive to keep up with the latest trends in the industry, either to be the first to dive into a promising framework or technology, or simply to avoid being left behind by our more enterprising colleagues. At iOSDC this year, however, I stumbled upon a session about a design paradigm I knew little about, even though it has been around for some time. The session was Server-Driven UI入門: 画面のStateを直接受け取るアプローチ (Introduction to Server-Driven UI: An approach to directly receiving the screen state) by Nade.
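Circling back to the hidden-API session for a moment: the UITextView placeholder trick could look roughly like this in a prototype. This is my own hedged sketch, not code from the talk; the selector is undocumented, so it is guarded with responds(to:) and may change or disappear in any iOS release:

```swift
import UIKit

let textView = UITextView()
let placeholderSelector = NSSelectorFromString("setAttributedPlaceholder:")

// Guard the call — the hidden method is not part of the public contract
// and may vanish or change behavior in a future SDK.
if textView.responds(to: placeholderSelector) {
    textView.perform(placeholderSelector,
                     with: NSAttributedString(string: "Type something…"))
}
```

Remember that this is prototyping-only territory: App Store review can reject binaries that reference private selectors, so keep such calls out of production builds.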
Server-driven UI may appear somewhat niche, but I feel it is worth keeping an eye on, or at least knowing that it exists. Server-driven UI means receiving the UI state directly from the backend. It enables changes to the client-side UI without releasing a new version of the app, and aims to reduce the amount of client-side business logic. This is achieved by implementing a set of predefined UI components within the client application; these should be simple and easily reusable across different parts of the app. Server responses then dictate which of these reusable components should be displayed on each screen, in what order, and with what content. On iOS, the approach is even more natural with SwiftUI, as views can adopt the Codable protocol and essentially mirror the server responses. Nade demonstrated this clearly in his presentation.

Of course, adopting such a paradigm involves trade-offs compared to traditional implementations. Let’s take a quick look at them.

Server-driven UI pros:

- Faster release cycles for new features, as you don’t need to ship new app versions; changes can be made on the backend.
- All business logic is shared between platforms, resulting in DRYer code, with changes propagated immediately across all clients.
- With minimal (ideally no) business logic on the client side, developers can focus on aspects like polishing the UI/UX to provide a smoother experience.

Server-driven UI cons:

- The need to define and build a large set of generic UI elements up front, which can be overwhelming for small teams.
- Testing all possible combinations of these generic UI elements can be costly.
- The server-side architecture becomes more complex, with additional "Backend for Frontend" layers.

As you can see, server-driven UI is a solution best suited to larger teams capable of managing the added backend complexity and infrastructure.
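As an aside, the Codable mirroring mentioned earlier can be sketched in a few lines. Everything below is my own illustration, not code from Nade's talk: the JSON shape, the ServerComponent type, and the field names are all made up for the example.

```swift
import Foundation

// A hypothetical server payload: an ordered list of generic components.
// The component set and field names are illustrative only.
enum ServerComponent: Decodable {
    case header(title: String)
    case text(body: String)
    case button(label: String, action: String)

    private enum CodingKeys: String, CodingKey { case type, title, body, label, action }

    init(from decoder: Decoder) throws {
        let c = try decoder.container(keyedBy: CodingKeys.self)
        switch try c.decode(String.self, forKey: .type) {
        case "header": self = .header(title: try c.decode(String.self, forKey: .title))
        case "text":   self = .text(body: try c.decode(String.self, forKey: .body))
        case "button": self = .button(label: try c.decode(String.self, forKey: .label),
                                      action: try c.decode(String.self, forKey: .action))
        case let other:
            throw DecodingError.dataCorruptedError(forKey: .type, in: c,
                debugDescription: "Unknown component type: \(other)")
        }
    }
}

let json = """
[{"type": "header", "title": "Welcome"},
 {"type": "text", "body": "Your plan renews soon."},
 {"type": "button", "label": "Renew", "action": "renew_plan"}]
""".data(using: .utf8)!

// `screen` now holds the ordered components for one screen; a SwiftUI view
// could switch over each case and render the matching reusable component.
let screen = try JSONDecoder().decode([ServerComponent].self, from: json)
```

In a real client you would likely decode unknown component types into a fallback case instead of throwing, so that old app versions stay tolerant of new components the server starts sending.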
However, it can offer significant benefits in terms of client flexibility and delivery speed. @ speakerdeck

In the End

iOSDC Japan 2024 was both an entertaining and valuable experience for me. Among the sessions I attended, the two I’ve shared today stood out as the most practically useful. I hope you find them interesting as well, especially now that the iOSDC organizers have published the session recordings. Thank you, and until next time!
Introduction

Hello! I'm high-g (@high_g_engineer) from the New Vehicle Subscription Development Group, based at Osaka Tech Lab. In this article, I'll look back on Developers Summit 2024 KANSAI (hereafter "DevSumi Kansai"), held on September 18, 2024.

What is DevSumi Kansai?

The original Developers Summit ("DevSumi") is a conference-style event for software developers held every year since 2003 in Tokyo and online. DevSumi Kansai has been held since 2011 as a spin-off of DevSumi and is well loved as a festival for IT engineers in the Kansai region. The theme for 2024 was "Let's create a new standard together." The sessions covered a wide range of topics, including security, AI, development methodologies, developer productivity, DevOps, and engineering careers. On the day, almost every session was close to full, and together with the sponsor booths the whole event was bustling. https://event.shoeisha.jp/devsumi/20240918

Osaka Tech Lab's first sponsorship

Our company took part by running a sponsor booth and giving a sponsor session. KINTO Technologies has been sponsoring various events since this fiscal year, but this was the first time Osaka Tech Lab (the nickname for the KINTO Technologies Osaka office) sponsored an event. We therefore planned carefully — what the booth should contain, what novelty goods to prepare, and how to guide visitors on the day — and went in fully prepared.

The sponsor booth

Here is the finished booth! Let me walk through a few of the items.

![The finished KINTO Technologies sponsor booth](/assets/blog/authors/high-g/20241030/img4.jpg =512x)

Osaka Tech Lab in data

The board in the center of the photo, covered with graphs and figures, visualizes information gathered from interviews with Osaka Tech Lab members. The whole thing is designed to look like plastic-model parts, which is a fun touch! Our designer put it together on short notice, and it came out great.

![A board visualizing Osaka Tech Lab in graphs](/assets/blog/authors/high-g/20241030/img5.jpg =512x)

Survey board

Next, the survey board on the right side of the photo. Visitors to the booth placed stickers on the spots that applied to them, letting us visualize what kinds of people stopped by.

![The sticker-based survey board](/assets/blog/authors/high-g/20241030/img6.jpg =512x)

Survey results

We learned that while KINTO and KINTO Technologies already enjoy a certain level of recognition, there is still a sizable audience we have not reached. As for job roles, many stickers landed between "backend" and "frontend"; the nature of the event may play a part, but it suggests that Kansai may have more generalist-leaning engineers than specialists. I think we collected data that will be important for raising awareness of Osaka Tech Lab. Thank you to everyone who took part in the survey!

![Survey results](/assets/blog/authors/high-g/20241030/img7.jpg =512x)

The Kumobii plushie

The white, fluffy figure sitting on the left side of the photo is "Kumobii," KINTO's official mascot. Isn't it adorable? It was a big hit with booth visitors, and even got mentions on X by Findy and KIKKAKE. Thank you!

![Tweet from Findy](/assets/blog/authors/high-g/20241030/img8.jpg =512x) ![Tweet from Kikkake](/assets/blog/authors/high-g/20241030/img9.jpg =512x)

Novelty goods

On the day, the first 100 visitors to our booth received a multi-card tool (an aluminum tool set that packs several functions into a single card). Those who attended our sponsor session also received a Kumobii paper clip.

![Multi-card tool](/assets/blog/authors/high-g/20241030/img10.jpg =512x) ![Kumobii paper clip](/assets/blog/authors/high-g/20241030/img11.jpg =512x)

The sponsor session

The speaker was Okita-san, a founding member of Osaka Tech Lab. The session theme was "Aim to be a PM and mobile app engineer! The future of mobility from Osaka — my challenge, and Osaka Tech Lab's, within the Toyota Group." In outline, the session covered:

- Okita-san's career so far
- Taking on a career path with no precedent in the company: "PM and mobile app engineer"
- Planting the seeds for launching, developing, and operating products at Osaka Tech Lab

The phrase "don't let the passion die" stuck with me. I felt that Okita-san's career today exists precisely because they kept going without giving up, even at times when motivation dipped.

In closing

This was my first time at DevSumi Kansai, and I was genuinely surprised at how many people attended despite it being a weekday. While helping out at our booth I was also able to catch several sessions; I learned a great deal, came away with many seeds for next actions, and thoroughly enjoyed the event as a whole. Large conferences covering such a broad range of technical areas are rare in Kansai, so I hope this energy continues for a long time to come. My sincere thanks to the DevSumi Kansai organizing staff. Thank you!
[Link to Amazon](https://amzn.asia/d/06GXK0Fd)

I’ve been thinking about compiling a summary of the key takeaways from Vision, co-authored by Hans P. Bacher and Sanatan Suryavanshi, to help retain its insights. Since it’s such a valuable book, I’d love to share some of those insights with you here. The visual designs that surround us in our daily lives evoke a wide range of emotions. This book reveals why certain visuals leave a strong impression, and how to understand the psychology behind them. The authors offer practical methods for storytelling through visuals, demonstrating how the selection of colors and shapes can affect emotions. This allows us, even those without professional design backgrounds, to enhance our everyday visual experiences. I believe reading Vision will give us a fresh perspective on our daily lives. If this blog post interests you, I suggest taking a look at the book.

This book consists of:

- Foreword
- Introduction
- The Visual Communication Process
- The Psychology of Images
- Line
- Shape
- Value
- Color
- Light
- Camera
- Composition
- Conclusion

In this article, I’d like to provide a brief overview of the sections "The Visual Communication Process," "The Psychology of Images," and "Line."

The Visual Communication Process

The authors explain that the visual communication process is an automatic response: whatever enters the eye immediately triggers a variety of emotions. For instance, just by glancing at a movie poster showing a shadow stretching in a dim alley and a terrified person standing there, we instinctively understand that the film centers on fear and horror. This instantaneous emotional response is triggered automatically. The book's stated goal is to empower readers to break these automatic processes down into elements and to understand why these feelings are triggered. The following chapter immediately explores the psychological side of these automatic processes.
The Psychology of Images

Why do we feel relaxed, or fearful, when we look at an image? To describe this process, the book identifies three elements behind the psychological effect images exert: (1) Association, (2) Mechanics, (3) Resonance.

(1) Association

For example, the combination of a dark alley and a shadow is commonly associated with fear. Images and videos connect to our past memories, and the brain automatically recalls certain emotions when we see them, much like the process of association. By carefully choosing and combining the appropriate visual elements, a work can therefore leave a strong and lasting impression on the viewer.

(2) Mechanics

The combination of elements such as lines, shapes, and colors plays an important role in visual design. For example, placing opposing colors*1 next to each other creates contrast and produces a visual stimulus. In this way, visual elements interact to create stimulation and harmony.

(3) Resonance

"Resonance is what happens when you align what you are saying with how you are saying it." (from page 20) For example, if bright, pop colors are used in a scene portraying the tragic death of a loved one, the visual style will clash with the content, and the sadness won't resonate with the viewer.

The authors emphasize that the appeal of an image can be enhanced by deliberately combining design elements such as color. These elements should not be random or accidental, but selected intentionally to appeal to the viewer's emotions.

The Anatomy of an Image

This chapter dissects the image. The authors explain that you can build a way of 'looking' by breaking an image down into the items listed below, and that this is the first step in visual storytelling. They also recommend referring to the list constantly.

- Subject: Literally, the subject.
- Format: The shape and proportion of an image.
- Orientation: Horizontal or vertical aspect.
- Framing: The arrangement in a composition.
- Line: The linear component.
- Shape: The shapes within the frame.
- Value: The degree of brightness or darkness.
- Color: Literally, the colors.
- Pattern: A design or repeated elements.
- Silhouette: A blacked-in outline of an element in the design.
- Texture: An indication of the tactile quality of a component.
- Light: The illuminating element.
- Depth: A sense of space.
- Edge: The quality of the separations between the shapes.
- Movement: Any moving element.

Line

"Line" here refers to structural and compositional lines, which create a path for the eye to travel along. Although lines seem elementary and are often overlooked, the authors point out that they encompass many aspects and hold the potential to enable a wide range of creative work. The diagrams below show examples of the main lines (partial excerpt).

- Frame borders (applicable to all of 1 to 4): the top, bottom, left, and right border lines that are always present in any composition.
- 1 and 2: Figures in a composition create structural lines through their orientation.
- 3: The actual or implied movement of an object forms a clear line.
- 4: A dark mass creates structural lines.

Orientation

Line direction refers to the orientation of a line in relation to the top, bottom, left, and right edges of the frame. Emotions can be conveyed through the direction of lines; paired with the right motifs, lines can express a wide range of deep emotions. Examples:

- Vertical: Represents strength against gravity and elegance, as seen in tall objects like trees and buildings reaching skyward.
- Diagonal: In contrast to horizontal and vertical lines, diagonals create drama, energy, and a sense of movement, symbolizing broken balance and motion.
- Horizontal: Symbolizes calm and tranquility, often associated with horizons, oceans, and expansive open spaces.

Placement

The placement of lines divides the frame and creates shapes, and the balance of those shapes changes the attractiveness of a composition.

- Equally divided, symmetrical: Unnatural, artificial.
- Asymmetrical: Balance makes it attractive (trisection, the golden ratio, and so on).

Quality

The quality and character of lines evoke strong emotions.

- Straight lines: Tension.
- Curves: Softness.
- Broad lines: Strength and sturdiness.
- Ultra-fine lines: Refinement, delicacy.

Harmony & Contrast

The moment a line is drawn within a frame, it establishes either harmony or contrast. In other words, the interaction between lines generates rhythm, harmony, disharmony, balance, imbalance, unity, and more. For instance, a horizontal line at the bottom of a frame creates harmony, but turning it into a diagonal instantly introduces contrast. However, when harmony or contrast is pushed too far, it can result in boredom or excessive complexity, so it's essential to strike the right balance between the two.

Rhythm

Repeating lines establish rhythm and introduce a new dimension to the composition.

- Regular lines at regular intervals: Orderliness (and, potentially, boredom).
- Random repetition: Energy, tension.

Conclusion

The thoughtful use of well-coordinated design elements can be a powerful mechanism for creating visuals that deeply resonate with the viewer. Even an element as basic as a line can evoke emotion, create tension, provoke boredom, and establish harmony or contrast. The authors repeatedly emphasize the importance of not getting lost in the details, urging readers to "think simple." They add, "By repeating this process, you will develop a deeper understanding of composition and learn to apply it in your own distinct style."

I’ve written the above as an introduction to the opening section of the book, and I hope this brief overview broadens your perspective on visual analysis. I look forward to sharing insights from other chapters if the opportunity presents itself.
Hi! My name is Ryomm, and I work on my route (iOS) at KINTO Technologies. The other day, on July 5th, the Muromachi Information Sharing Meeting Lightning Talk (LT) Tournament 🎋Tanabata Special🎋 was held! This time, I bring you a transcribed version of the lightning talk I gave there: "Master Slack! Be 3000 times more efficient with Slack." See the materials here @ speakerdeck (in Japanese).

Motivation

Are you making the most of your Slack? With its variety of features, Slack can help resolve some of the minor frustrations you face at work once you get the hang of it. In this quick guide, we will explore the basics of Slack. As you read, try to think, "I could use this feature for that!" or "It might be interesting to combine these two!" Keep these possibilities in mind as we go along.

Mastering "Search"

First, the search function! Slack's search can be used in a variety of ways by making full use of queries. In addition to standard phrase and negative searches like on Google, Slack also lets you filter by date range, search within specific channels, look up messages based on reaction emojis, and even search by the source of shared messages. Of course, you can also filter through the GUI without having to remember any query syntax, but for searches you run often, you can note down the query and simply paste it into the search bar, which speeds things up.
https://slack.com/intl/ja-jp/help/articles/202528808-Slack-内で検索する

Mastering "Custom responses"

Next, custom responses! You can configure Slackbot to reply with any response you like by opening the Slackbot customization page. When someone posts one of the phrases you set on the left, Slackbot randomly returns one of the answers listed next to it. You can take advantage of this randomness to roll dice, as shown on the right.
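As a concrete illustration of such a dice-style custom response (my own made-up example of a trigger phrase and its candidate replies, from which Slackbot picks one at random):

```
Trigger phrase: dice
Responses (Slackbot picks one at random):
⚀ You rolled a 1
⚁ You rolled a 2
⚂ You rolled a 3
⚃ You rolled a 4
⚄ You rolled a 5
⚅ You rolled a 6
```

The same random-pick behavior works for anything from lunch-spot lotteries to picking a review assignee.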
https://slack.com/resources/using-slack/a-guide-to-slackbot-custom-responses

Mastering "Mentions"

Use @channel and @here mentions when you want to spread your message widely. With @everyone you can mention the entire workspace, though there aren't many occasions to use it. These mentions won't reach members who have notifications turned off, and they can't be used inside a thread. Also, mass mentions tend to clutter notifications, so think about the time and situation before using them.
https://slack.com/help/articles/202009646-Notify-a-channel-or-workspace

Mastering "User groups"

When you want to reach the same set of people repeatedly, user groups come in handy. You can create user groups to mention multiple people at once, or to add a group to a channel. Furthermore, you can register channels associated with a group, so when onboarding someone into multiple channels, you can simply add them to the user group.
https://slack.com/help/articles/212906697-Create-a-user-group

Mastering "Stock-type information"

Information can be broadly classified into two types: stock and flow. Stock-type information accumulates; flow-type information is transient. Regular Slack conversations are flow-type, and stock-type information that should be preserved is often summarized in Confluence. There are several ways to keep flow-type information in Slack as stock: Later (saved items), pins, canvases, bookmarks, and lists. They combine well with workflows and search, so they are easy to mix and match.

Mastering "Notifications"

You can also customize your notification settings quite freely: divide channels into sections with mute and display settings for each, set keywords so you are notified whenever a specific term is mentioned, and use the Reacji Channeler app to forward messages to a specific channel when a certain emoji reaction is used.

Mastering "Reminders"

You can send a reminder to a specified channel, and it can be set to repeat.
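For reference, reminders are set with the /remind slash command, which takes a target, a message, and a schedule. The channel name below is made up:

```
/remind me to submit the weekly report every Friday at 4pm
/remind #ios-dev "Standup in 10 minutes" at 10:50am every weekday
/remind list
```

The last command lists the reminders you have already set, so you can check or delete repeating ones.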
It's a good idea to use reminders in combination with workflows or scheduled message sending!
https://slack.com/intl/ja-jp/help/articles/208423427-set reminder

Mastering "Huddles"

You can make calls in Slack using huddles! The best thing about huddles is that your chat history stays in Slack. Another advantage is that you can join a call without needing to know a URL, making it easy to jump in and participate. You can also create a URL for joining the huddle, so you can jump in from your Outlook calendar. You can even enjoy the huddle music while working alone.

Mastering "Email forwarding"

You can also send emails to a Slack channel. On the mail-client side, filter the emails you want to share with your team and forward them to the email address for the desired channel.

Mastering "Workflows"

Workflows are the key to automation in Slack! They can be used in many cases: standardizing request templates, consolidating collected data in a spreadsheet, chaining actions together, onboarding people who join a channel, and so on. Workflows can also be combined with spreadsheets and GAS (Google Apps Script), widening the possibilities even further.
https://slack.com/intl/ja-jp/help/articles/17542172840595-create workflow---create workflow with Slack

Mastering "Custom apps"

If you want to do something that is hard to achieve with workflows, you can also use the Slack API to build a custom app. With the recent introduction of the Slack CLI, building custom apps has become much easier. Coding is still required, but the added flexibility opens up a wide range of possibilities in Slack.
https://api.slack.com/docs

Conclusion

I realized many people might not be very familiar with Slack, so I decided to present this during our Lightning Talk (LT). To my surprise, I received requests to turn it into a blog post! So, here it is. How useful did you find it? If there are any features you weren’t aware of, I encourage you to start trying them out today!
Introduction

This article is a personal collection of ways of thinking that seem useful for people who are about to become leaders in an organization, or who have recently become one.

At KTC (KINTO Technologies), I was entrusted with a team-leader role for the first time in November 2023, but up to then, including my previous job, I had essentially no leadership experience. To make up for that, from the moment the assignment was decided I went looking for ways of thinking about leadership and management in books, web articles, and my managers, and picked up several of them.

I would have liked to cover everything I learned, but some of it did not land with me: I couldn't fully digest it, it felt obvious, I didn't need it yet, or it simply didn't interest me. I decided those ideas were "not installable for the current me" and left them to some future encounter. What follows are the eight ideas that I did install. It's a biased list, but I hope something here becomes a useful hint for your own work.

Divide the mammoth the group hunted among its members

"Some people say the benefit of an organization is the sense of solidarity among peers. There is certainly that side to it, but it is a secondary benefit. A group that hunted a mammoth would end up eating from the same pot and, as a result, become close. The meat comes first; camaraderie comes as a bonus." — Hiroo Ando, The Leader's Mask (リーダーの仮面)

The Leader's Mask was the first book I picked up after it was decided I would become a team leader. My initial image of leadership centered on things like "raising members' motivation" and "connecting with members' hearts," so I remember being shaken by the dryness of the "Shikigaku" approach I encountered there. But the model made sense to me, especially the idea that individuals get their share precisely because the group hunts the mammoth together: "First comes the increase in the group's benefit, and only after that the increase in the individual's benefit. That is the correct order." It has become one of my guides for decision-making.

A manager's role is "achieving goals" and "maintaining the group"

"There are many ways to think about this, but 'PM theory' focuses on two functions: performance and maintenance. Mapped onto management, the performance function is the goal-achievement function, and the maintenance function is the group-maintenance function — the role of keeping the team going and energizing it." — from "Why is the 'overburdening' of middle managers never resolved? Easily missed pitfalls and four steps to rebuild management"

The cited article argues that changes in the environment and the times have made both goal achievement and group maintenance harder, and that recently so much weight is placed on group maintenance that managers who can go from strategy-making to goal achievement are not being developed. Setting those harder questions aside, simply acquiring the frame "a manager's job is goal achievement and group maintenance" clarified what I, as a manager, should be doing — which was good for my peace of mind.

Communicate the purpose; delegate the means

"Treating staff members' trial and error as a 'loss' and, to speed things up, teaching the correct answer from the start or hand-holding them through everything — this mindset is absolutely off-limits. People grow only through experience. (…) An organization that hands out answers ends up slower. Because subordinates do not grow, speed drops over the long run." — Hiroo Ando, The Leader's Mask

"For people who seem fully capable, communicate only the purpose and leave the rest to them. For people who seem slightly short on capability, give the purpose plus concrete hints for action: 'here is how I would do it.' And for people who are still inexperienced, communicate the purpose and the actions together." — Shu Yamaguchi, Project Management as Taught by a Foreign Consulting Firm (外資系コンサルが教えるプロジェクトマネジメント)

I understood vaguely that micromanagement is usually a bad move, and in any case I have known since day one as a team leader that I lack the capacity to keep micromanaging. So I am (still, right now) searching for non-micro management: agree only on the goal state to achieve and the requirements to satisfy, entrust the rest to members, and support them so they can keep moving once entrusted.

Work you like doesn't feel like a burden

- Work you like doesn't feel like a burden. It differs from person to person and can't be measured by volume alone.
- It pays to know what kind of work each member likes.
- Rigorous task and effort management is fine, but the effect isn't dramatic enough to justify the overhead.

This was advice from my manager when I consulted them, saying, "I can't tell whether members have the capacity to take on a request because I don't have a grip on their load." To my question "Should I look at everyone's tasks in detail to understand their load?" the answer was: "You can, but the effect isn't worth the effort. Rather than strict task management, understand what people like and dislike, and check whether you are assigning work accordingly." In other words: track load through overtime hours, and know each person's likes and dislikes so the right people end up in the right places. Managing tasks and effort in fine detail seemed likely to eat my time on non-essential work, such as bookkeeping to make the visualizations line up, and I probably lacked the skill to manage at that granularity anyway. So I dropped it. I basically assign work according to each member's area of responsibility, while keeping the question "am I distributing work according to people's preferences?" in mind.

Make it possible to enter the intersection at 60 km/h

"Once you allow exceptions, a team or organization becomes very fragile. 'I was in a hurry, so I thought it was fine to run the red light.' Allow even one such car, and the road descends into chaos." — Hiroo Ando, The Leader's Mask

"In the work around you, how many members can you trust to follow the agreed rules 100%? Deadlines, start-of-month data processing, escalation, file sharing — work is full of rules. And what happens when you can't trust people to follow them? Someone quietly ends up monitoring things, sending reminders, or fixing data and deliverables behind the scenes. In the end, someone has to stay glued to the task. This becomes a major obstacle to decoupling work." — from "People who can 'decouple' work (practice, part 3)" by Rinaru

We keep systematizing our operations day by day, but for efficiency I want to grow the area where the person in charge can execute alone without checking with anyone. If even one part is uncertain, confirmation is needed, and accidents happen; if things are certain, no confirmation is needed. What matters is that rules exist and that everyone can trust the rules are reliably followed. Of course, it would defeat the purpose to be so bound by rules that we drift away from the value we are actually aiming for, but as a rule I try to set a baseline rule wherever possible, even if only as a first draft.

Constants, variables, and "variables that are almost constants"

"What Hayashi-sensei and Morioka-san had in common was distinguishing constants from variables and moving the variables — that is, spending your life's time and energy on the things you can actually do something about." — from "Move the variables, not the constants"

I learned this way of thinking when a department head shared it in an internal meeting. Indeed, watching the managers around me, rather than fighting the current they seem to read the forces at play and arrange the situation so that things naturally turn out that way. What struck me most in the share was this point: "Some things are really variables but are currently close to constants. (Discerning this is crucial.)" Company culture, institutions, and people's ways of thinking fall into this category. They look like constants that won't budge easily, but I thought it important not to forget that they are actually variables that can be changed.

People forget things they've been told only once

"Subordinates find it harder to work under a boss whose thinking is hard to read than under a boss who is bad at the job. Even important things, told only once, get buried in the daily flood of information." — from "The leader's communication techniques that eliminate order-waiting subordinates"

"What matters is to have them repeat the act of returning to the project's purpose in their thought process, steering it into becoming a kind of decision criterion. When a subordinate comes with a question, answer with a question that sends them back to the project's purpose. Do this, and the project members will eventually return to the project's purpose on their own whenever they are unsure of a decision." — Shu Yamaguchi, Project Management as Taught by a Foreign Consulting Firm

I'm forgetful myself, and often wince when told "I told you that before." Conversely, when someone forgets what I said, I consider myself partly responsible for not having communicated it memorably, because communication is a problem between two parties, not one. It seems about right to voice the thoughts and ideas you want to share at every opportunity.

Good-humored leaders and bad-tempered leaders

"Edmondson defines psychological safety as 'a climate in which people are comfortable taking the interpersonal risk of speaking up with concerns, questions, and ideas.'" — from "Why does 'psychological safety' keep causing confusion" (Q by Livesense)

"When the amount of information circulating within a team drops, the project almost invariably falls into a dangerous state. (…) When the sender cannot predict how the receiver will react to information, the volume of information in circulation declines. (…) Ultimately, in a team led by a good-humored leader, the amount of information flowing between members, and between members and the leader, increases." — Shu Yamaguchi, Project Management as Taught by a Foreign Consulting Firm

"From my experience working with many juniors and staff, 'how to raise everyone's motivation' can be summed up in one line: the leader works more earnestly, and with more visible enjoyment, than anyone else. No development method beats this." — Nobuyuki Sakuma, Nobuyuki Sakuma's Sly Work Techniques (佐久間宣行のずるい仕事術)

Simply put, when the boss is in a bad mood, things get awkward, people walk on eggshells, and members can't focus on their work (at least I can't). So my baseline is to stay in good humor and create an atmosphere where anything can be said. That said, I'm not always genuinely cheerful, and acting cheerful while actually irritated wears the heart down. If you spot a leader or manager nearby who looks like they're struggling, please give them a quiet word of appreciation.

In closing

How was it? I'll be glad if even a little of this proves useful in your work. Incidentally, at KTC a leader is not someone above the members or more skilled than them; it is just a function, a role, attached and detached to form the best formation for the current state of the organization and the business. Flip that around, and if a leader is just a function, then members are just functions too. In the end we work as a team, so it isn't that members alone should move, nor that the leader should shoulder everything; as long as the team can solve the problem, that's OK. Let's not be too hard on ourselves!
Introduction

Nice to meet you! I’m Hyuga, a producer in the Mobile App Development Group. Today, I’m thrilled to share my first blog post, featuring some exciting and fun content! AI generation is a trending topic right now, and in this post I’ll be diving into the world of AI music generation. AI music generators are truly impressive! Simply put, all you need to do is provide a few basic instructions, and the AI will generate professional-quality music for you. What's more, it doesn’t just create the accompaniment; it also adds vocals with natural-sounding pronunciation. And it only takes a few minutes to generate. It’s truly nothing short of magical. Experience the ultimate feeling of being a famous music creator! In this post, I’ll share how I used AI music generation to spontaneously create a company song for KINTO Technologies!

What is "Suno AI"?

There are various AI music generation tools out there, but the one I used is called Suno AI, developed by a startup based in the United States. Suno, Inc. was founded by four people who originally worked at an AI startup called Kensho: Michael Shulman, Georg Kucsko, Martin Camacho, and Keenan Freyberg. Suno AI was first released in December 2023 and has been continuously updated since then. The latest stable version, v3.5, was released in May 2024 (as of August 2024).

Below, I’ll briefly outline the steps for generating music with Suno AI. Please note that since AI evolves at an incredible pace, these steps may change in the future. Consider this a reference, and make sure to check for the most up-to-date information.

1. Visit the website (https://suno.com/)
Go to Suno’s official website.

![](/assets/blog/authors/hyuga/20240814/sunoai1.png =630x)

2. Create an account
Sign up for a free account. You can use Google, Discord, Microsoft, or other supported accounts to register.
![](/assets/blog/authors/hyuga/20240814/sunoai2.png =630x)

3. Go to the song creation section
Navigate to the "Create" section.

4. Enter a text prompt
Input your desired prompts, such as song lyrics, style (genre), and more.

![](/assets/blog/authors/hyuga/20240814/sunoai3.png =630x)

5. Generate the song
Suno AI will take your prompts and generate a song based on the information you provided.

6. Download the song
Once the song is ready, you can download it for further editing or share it as is.

Creating an original company song with Suno AI (lol)

Now let's take a look at the power of Suno AI! If I’m going to do it, it might as well be something fun, so I decided to make an unofficial company anthem for KINTO Technologies! Here’s the plan:

1. Ask the managers of KINTO Technologies to share their thoughts on the company’s strengths!
2. Use the key phrases from their feedback to generate lyrics with ChatGPT!
3. Input the generated lyrics into Suno AI and let it produce an amazing company song!

That’s the basic idea, but I thought... hmm, something is missing... Oh, I know! I’ll sing it myself!!! (lol) So the plan is to remove the AI-generated vocals and have me, Hyuga, sing the song myself! (lol) New theme: "Using Suno AI to create an original company song and singing it myself (lol)"!

Comments from the managers

We received some incredible feedback about the company’s strengths:

- A strong sense of speed, and the belief that as long as you have passion, anyone can take on challenges and excel!
- We have many engineers who proactively share their insights.
- A modern development environment.
- A rapidly growing development team.
- A love for communication, a curiosity for new technologies, and a blend of experienced and newer engineers.
- If you step up, you can take on various initiatives; your scope is the world, surrounded by talented people.
- Constantly taking on new challenges.

And so on! These are really great comments!

The perfect lyrics generated by ChatGPT!
I carefully input the managers' comments into ChatGPT, and it generated some awesome lyrics! Yeah, yeah, Here we go! KINTO Technologies, let’s go! With speed and soul, anyone can try Feel the potential, reach for the sky Proactive share, paving future's way Modern environment, innovation's stage We are KINTO Technologies! Friends who love to communicate, With passion for new tech, we strive Let's walk the KINTO path, where dreams thrive With soul, any wall can be climbed Chasing dreams, in an endless sky KINTO Technologies, let’s Go! Pioneering the future, let's go forward! Urgent request While I was writing this article, I received a DM from the person responsible for translating the Tech Blog: "Hyuga-san, could you create an English version of the song??” Whoa! What a surprise! (lol) Of course, there's no reason to turn down such a great request! Without a second thought, I replied, "I can do it!" (lol) However, I felt that simply recreating the same vibe as the Japanese version wouldn't be as interesting. Since it's the English version, I thought it should have more of a Western music feel—something a bit more stylish. So, I decided to generate the English version with the following genre specifications: Japanese version: A refreshing rock song with a strong sense of speed. English version: A calm yet uplifting jazz song. Now, I can't wait to see how it will turn out! A masterpiece was born! (lol) The moment I listened to the finished song, a tear ran down my cheek... It was simply too perfect! It exceeded all expectations! Since we've created the ultimate original company song, I'd love to share it with you! Please give it a listen! First, here’s the Japanese version! https://www.youtube.com/watch?v=zmv06e8cTFI Next is the English version! https://www.youtube.com/watch?v=tCIkXdUv9NA Summary How was it? Aren't these songs amazing, even better than expected? My goal moving forward is to have this original company song officially recognized as our company anthem! 
(lol) By all means, I encourage you to try this inspiring experience with AI music generation for yourself. Even with a free account, you can generate up to 10 songs per day. Let's take a step into the world of AI-driven music! Thank you for reading until the end.
Introduction This is the organizers' report on the second "KINTO Technologies MeetUp!" The event page is here: 【第2回】KINTOテクノロジーズ MeetUp! - connpass. Previous articles: "Until the first KINTO Technologies MeetUp! was held" | KINTO Tech Blog (kinto-technologies.com) and "KINTO Technologies MeetUp! (operations staff ver.)" | KINTO Tech Blog (kinto-technologies.com). The first MeetUp was an offline-only event, but this time we also prepared for online participation. Here is a look behind the scenes at how the organizing team took on the new challenge of a hybrid event while applying what we learned from the first one!

Pre-event tasks

Arranging refreshments We planned so that nobody would leave feeling there wasn't enough food. At the previous event we ran out of pizza in the second half, and some attendees told us it felt a bit lacking. We initially planned pizza again with a larger budget, but partway through we learned about Maisen's mini burgers and decided to serve those instead. The mini burgers were great for several reasons: they are individually wrapped and easy to pick up; because of that, no paper plates are needed; leftovers can be taken home, so there is no disposal work; and since they are meant to be served at room temperature, delivery timing is flexible. We still had feelings for hot pizza, but weighing those advantages, we went with the mini burgers this time. We also prepared individually wrapped snacks that people could grab casually. In the end attendees took plenty, and only a small amount was left over, so we consider it a success!

Arranging novelty goods Our goal was to provide novelties that people would genuinely want to take home and use. A good event plus a good novelty means that just seeing the item later brings back happy memories of the day. We selected items for practicality and for that "I want this" factor, and thought about how to hand them out so people would actually take them home. Not every item found a home on the day, but we hope the people who took them will remember the event whenever they catch sight of them.

Timetable design I was entrusted with the timetable, a crucial factor that determines the flow of the event, and I worked on it while feeling crushed by the pressure of that responsibility. The thing I was most conscious of was making sure attendees could enjoy the event without getting bored. The previous MeetUp's timetable had no breaks and did not allow time for a group photo with attendees, which were noted as points to improve. This time we secured break and photo time, and by adding Lightning Talks, which the first event did not have, we kept up the momentum so nobody would get bored. The event ran mostly on schedule, finished without a hitch, and ended on a high note, so I breathed a sigh of relief. That said, we changed the timetable at the last minute, and I reflect that our advance planning was not sufficient.

Preparing materials Beyond the speakers' presentation slides, we prepared materials to keep the MeetUp running smoothly: event guides for attendees at the venue, a seating chart for the roundtable, and pre-event slides. At the previous event we gave directions to the restroom verbally and wished we had things like a Wi-Fi QR code, so this time we created materials covering those points.

Attracting attendees and publicity Every company struggles to attract attendees to its events. We did too, and we would like to share two things that worked well for us: including the product names appearing in the talks in our publicity posts, and posting to the Joshi-sys (corporate IT) Slack.

Including product names from the talks in publicity posts We originally planned two announcement posts on X: one when the connpass page went live and one just before the event. However, two key metrics were not growing as much as we had hoped: page views on the connpass event page, and the number of registrations.

To address this, we decided to add more X posts, with the aim of increasing the chances of people learning about the event. But simply posting that we were holding an event might not catch anyone's eye, leaving awareness low. So we drilled down on the wording like this: What kind of post would catch my own eye as a reader? → Maybe one that mentions a product I'm interested in? → Including the pain points might make it even easier to imagine... It was a simple drill-down, but based on it we wrote posts that included the product names appearing in the talks and what was done with them. Here are the posts we actually published. As a result, connpass page views went up and registrations increased, solving exactly the problems we had been worrying about. What a relief!!

Posting to the Joshi-sys Slack On the day of the event, we asked several in-person attendees how they had heard about it, and every single one answered "the Joshi-sys Slack." What overwhelming influence...!! I rely on the Joshi-sys Slack myself, and if you are holding an event related to corporate IT, I recommend announcing it in the #share-event channel there. This event was partly aimed at connecting with corporate IT teams at other companies, and the warmth of the corporate IT community, including that Slack, really came through. To everyone who attended, and everyone who learned about us through connpass or X: we look forward to seeing you again!

Dealing with freeloaders Some time ago, a person who was apparently there just for the food attended another event hosted by KTC: drinking non-stop, scowling when approached, and giving off a vibe that things could turn ugly. That experience made us want to put a proper stance and procedures in place. Researching how others handle this, we found and borrowed the following policy: once a suspicious person gets in and causes a scene, it is already an incident, so screen them out before that happens; and even if one does get in (or tries to), be prepared to respond quickly so the impact on legitimate attendees is minimized. (Reference: 懇親会付きのイベントに出没する迷惑な人対策の一考察|wakatono (note.com).) We prepared based on the idea of preventing suspicious participants while having both the means and the grounds for removal ready. When we consulted the management of the venue building, the KINTO Technologies Muromachi office (COREDO Muromachi 2), they reassured us that if we contacted them they would handle things, up to and including restraining someone if it came to that. We shared that contact with all event staff so we could report anything immediately, and we stated clearly on the connpass event page that disruptive behavior is prohibited and may result in removal. Fortunately nothing of the sort happened this time, but deciding in advance how to act lets you focus on running the event without unnecessary worry.

Tasks during the event

Venue setup We had done this before and mostly followed the previous approach, but this time we also had to reserve space for the streaming equipment. Cable lengths limited where it could go, so within that constraint we thought about how to secure a relaxed viewing space for attendees. We had a base layout prepared in advance, and although day-of changes for the actual head count caused some scrambling, we believe we ended up with a space where attendees could relax. However, we did not finish revising the roundtable layout diagram in time, which caused some confusion at that point... regrettable...

Equipment setup, connection, and control With limited equipment on hand, we had to figure out how to stream with what we had. Being a hybrid offline/online event, there were many aspects to take care of. In the end there were no particular problems, and we were able to deliver the study session to everyone online as well, making it a very good event!

Camera Since the speaker appears in a small picture-in-picture window, we adjusted the angle so their face was clearly visible. The camera had to connect to the switcher used to control the stream, so we placed it near the streaming operations desk while still keeping it facing the speaker head-on. Basically it stayed fixed! Just correcting the framing for height differences when speakers swapped was all it took.

Audio Beforehand we checked the microphones, the volume in the venue, and the volume on the online stream. For a hybrid event these checks are critical! We checked carefully so that neither the in-person nor the online audience would suffer distorted or inaudibly quiet sound. During the talks we also monitored the online audio continuously so we could respond immediately to any dropouts or sudden clipping.

Streaming We settled on a relatively simple setup we had streamed with before, avoiding complexity to reduce the chance of mid-stream trouble. We projected the same video as the stream onto the projector, so speakers could also see how their slides and the picture-in-picture window were being broadcast. To keep the picture-in-picture from covering the slides, it roamed all four corners of the screen during the talks (moved by hand, mind you). Every speaker could present from their own familiar PC, the video stayed clean, and everything went smoothly. Mic, slides, picture-in-picture: check!

Reception, guiding, and seeing guests out

Advance preparation The same members had run the previous event, so we already had most of the know-how, but we listed the tasks in Confluence and assigned owners. Deciding the day-of roles at this stage prevented last-minute scrambling.

The big day Right after registration closed, we started preparing entry passes. The building's plain entry passes look a bit lonely on their own, so we had original covers made to dress them up! We do this at other events too, and I think this small extra touch matters a lot ^^

Reception Since it was the evening commute, many non-attendees were passing through, so whenever we spotted a likely attendee we called out loudly so they would notice the reception desk. (Occasionally we called out to completely unrelated people too... lol.) We had not stationed guides between reception and the 16th-floor venue, so each time attendees headed up, we messaged the venue team on Slack with something like "3 people on the way!" It was improvised, but the members waiting at the venue loved it, so I'm glad we did it!

Seeing guests out To be honest, we had not discussed this in detail as a team, and whoever was available handled it. Personally, hearing attendees' impressions in the elevator on their way out was tremendously fun!! "That was great!", "Please hold another one!", "KINTO Technologies looks like such a fun place!": many answered with real excitement, and hearing those voices in person felt like a real reward ♬

Emceeing I got through it on momentum and quick switching! Honestly, that covered about 80% of it. For the remaining 20%, here are the things I was careful about.

Adding comments before each talk At an in-person-only event, speaker changeovers are visible, so gaps do not feel awkward. But this was a hybrid event: online viewers see nothing except what the camera shows, so awkward silences creep in. Like the spoken introductions before a Showa-era song, I gave a brief preview of the upcoming talk or a recap of the previous one to smooth the transitions.

Addressing in-person and online audiences separately only in the opening and closing This was mostly a matter of time, but we avoided comments aimed at only one audience. When you are watching online and the emcee keeps making venue-only jokes, you feel left behind and lose the sense of being part of the event. I kept that in mind.

Speaking less through the microphone after the stream ended The opposite of the above, done to create an "in-the-room" feeling. You know how at the end of a band or idol concert, the final thanks are sometimes delivered without a mic, and it makes you feel "this is live!"? That.

I had thought speaking in front of people would be a high hurdle, but I am relieved I managed to play my part. If you are reading this, why not give it a try yourself?

Preparing and serving refreshments The previous event was our first external one, but with so many events held over the past few months, arranging catering has become routine! This time we served Maisen katsu sandwiches 🐷 "They all look delicious..." (Kin-chan). "Which one should I pick..." (Take-chan). Compared to pizza, they are much easier to eat casually. The style was "grab one after checking in, and feel free to eat during the meetup!", and since it was (presumably) an hour when everyone was hungry, it must have been a welcome touch! We ordered extra, but the hire-katsu sandwiches disappeared first ( ..)φ noted. I'll fine-tune the quantities next time. For drinks, beer was popular; we had three kinds, but at another company's event I saw about ten and was amazed...! Next time, more variety. Kaizen, kaizen.

Post-event tasks

Retrospective When a team activity wraps up, what comes next is, of course, the retrospective!

What is a retrospective? A retrospective is a session held at milestones or after an activity ends, to look back on what went well, what did not, and what happened, and then to think together about concrete action plans for next time. At KINTO Technologies, retrospectives are an established everyday practice; across departments, "furikaeri" and "retro" come up frequently in ordinary conversation.

This retrospective What we ran this time was a retrospective by the operations staff right after the MeetUp ended. There are no fixed rules for retrospectives, but we often use the KPT (Keep/Problem/Try) framework, and we used it this time as well. Running an event inevitably produces things that did not go well or could have gone better, so we chose KPT to pick up those seeds of improvement.

Preparing the retrospective With many participants and everyone's time limited, advance preparation (groundwork) is key to the best retrospective experience. We prepared as follows: create a wiki page everyone can write on freely (we use Confluence as our wiki tool); set up writing areas on the page following the KPT framework (this time we pre-created sections for categories such as "advance preparation," "publicity," and "day-of setup" to make entries easier); announce on Slack that participants should jot down anything they noticed before the session; and have the facilitator review the page right before the session, digesting everyone's entries while planning the flow of the meeting (what to discuss, and for how long, within the overall time box).

The retrospective: from icebreaker to ground rules And so the retrospective began! With plenty of time we would have warmed up the room by having everyone share their best moment of the event as an icebreaker, but time was limited, so we kicked off with a positive overall review from the operations lead instead. Next, to make this the best possible session, we opened with the "magic words" that make opinions flow: the ground rules, read aloud in front of everyone. Participate actively in the conversation! Speak up without hesitation, no matter how trivial it seems! Don't monopolize the conversation! Don't interrupt others. Understand that people who aren't speaking have thoughts too. We never doubt that we all did our best! Pursue causes, not blame!! Frame it as "the problem" versus "us." Celebrate that we were able to fail and learned something from it! With the ground rules read out, the retrospective proper began.

The retrospective: running KPT Because participants had written so much in advance, we could start with a clear picture of how to proceed. First, we went through the entries from the top. The important point here is having the person who wrote each entry read it aloud. The facilitator could read them instead, but then the feelings behind the written words would not reach everyone, so we place importance on self-read-aloud whenever possible. The flow was: the author reads the entry aloud; the content is restated and organized, gathering reactions, additions, and feedback from others; the results are organized and appended to the wiki; and when actions are proposed, they are agreed on by everyone and organized as Try items. We kept the discussion lively in this loop for as long as time allowed. In the end, the wiki filled out nicely, action items (Tries) included, and we all celebrated the completion of the event together!

A retrospective built by everyone

Retrospective of the retrospective This retrospective produced action items for the next time we run a similar event, entrusted to our future selves. At the same time, I believe it also served as a place to improve the "quality of relationships" and "quality of thinking" among the participants. By repeating collaborative events like this, I hope the "quality of action" and "quality of results" will change too, spinning up the "virtuous cycle of success" that makes the organization ever stronger.

Closing Thanks to each owner acting autonomously, we pulled off another event without major trouble, and one that was fun for us as well! Now that we have a hybrid offline/online event under our belt, we will apply this know-how to future events when the opportunity arises. This time our theme was to follow the first event's playbook and achieve the same results with less effort, but it made me think hard about the balance between standardization and ingenuity. Work that is purely mechanical should absolutely be turned into manuals (and that was our intent this time), but whether it is reception or catering, I have watched each successive KINTO Technologies event become more attractive and more efficient through the owners' ingenuity. Rather than forcing everything into a standard, perhaps what matters is an environment where we can leave things to each owner's initiative and creativity and let the events grow. And while we added online streaming out of a desire to reach as many people as possible, the chance to share challenges with corporate IT folks from other companies at the in-person social hour was truly valuable. Hybrid again, or back to offline-only? We will keep wrestling with that question as we keep holding events!
Introduction My name is K. Kane and I am an assistant manager in the Mobile App Development Group. I typically work as a project leader (PL) for the My Route project (PJ) and as an iOS engineer. Recently, we released the My Route iOS app, which can now be launched as a navigation app from the pre-installed Maps app on the iPhone (referred to below as the standard Maps). Although many navigation apps offer route settings, there appear to be surprisingly few Japanese websites that explain how to configure this feature, which is why I decided to write this article. Differences from Share Extension On iOS, when you want to share data with other apps, it is common to use a Share Extension. When using a Share Extension to share data in a format another app supports, that app appears as an option in the share menu's list of available apps. In the case of the standard Maps app, the results of location searches are provided as universal links that begin with https://maps.apple.com/? . These links are shared through the regular share menu, so you can receive them by setting up an appropriate Share Extension. On the other hand, when sharing a route, both the starting point and the destination are shared, so the normal share menu does not appear. Instead, the route can only be shared with apps configured as routing apps. How to set up as a routing app To set up a routing app, you simply need to add a Capability. There is no need to implement complex logic as with a Share Extension (though you will need to handle the received data, which is explained later). In the project, select the app's TARGETS, then choose "Signing & Capabilities", and click "+Capability" in the upper left. *Xcode version in the image: 15.4 In the separate window that appears, type "Maps". From the two options displayed, double-click "iOS, watchOS" to add it. "Maps" will be added under "Signing & Capabilities". 
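For readers who cannot see the screenshots, adding the Maps capability and ticking transport modes results in an `MKDirectionsApplicationSupportedModes` entry in Info.plist. The sketch below shows roughly what Xcode writes; the particular mode strings are just an example and depend on which checkboxes you select:

```xml
<!-- Written into Info.plist by the Maps capability; the modes below are illustrative. -->
<key>MKDirectionsApplicationSupportedModes</key>
<array>
    <string>MKDirectionsModeCar</string>
    <string>MKDirectionsModeBus</string>
</array>
```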
Then, select the transportation modes you want to support for receiving routes. By configuring these settings, your app will appear in the list of route apps. After making these changes, the item shown in the image below will be added to Info.plist. How to extract the data you receive Next, I will explain how to receive the data passed when the app is launched from the list of route apps. Data reception is handled by the SceneDelegate class. If the SceneDelegate class does not exist in your project, please add it. I won't go into detail here, but you specify the added class in the "Delegate Class Name" entry in Info.plist. For a SwiftUI app, you can set it within the AppDelegate class specified via @UIApplicationDelegateAdaptor in the struct that conforms to App.

```swift
import UIKit
import MapKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {

    // Called on a cold launch; the directions URL arrives in the connection options.
    func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
        if let context = connectionOptions.urlContexts.first,
           MKDirections.Request.isDirectionsRequest(context.url) {
            printMKDirectionsData(context.url)
        }
    }

    // Called when the app is already running and receives a URL.
    func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
        if let url = URLContexts.first?.url,
           MKDirections.Request.isDirectionsRequest(url) {
            printMKDirectionsData(url)
        }
    }

    private func printMKDirectionsData(_ url: URL?) {
        guard let url else { return }
        let request = MKDirections.Request(contentsOf: url)
        if let source = request.source, let destination = request.destination {
            if !source.isCurrentLocation {
                print("dep_lat: " + String(source.placemark.coordinate.latitude))
                print("dep_lng: " + String(source.placemark.coordinate.longitude))
                print("dep_name: " + (source.name ?? ""))
            }
            if !destination.isCurrentLocation {
                print("des_lat: " + String(destination.placemark.coordinate.latitude))
                print("des_lng: " + String(destination.placemark.coordinate.longitude))
                print("des_name: " + (destination.name ?? ""))
            }
        }
    }
}
```

Data is passed from the standard Maps as an instance of the MKDirections.Request class. For details on the MKDirections.Request class, please refer to this link. While the class has properties like departureDate, these do not seem to be set in the data passed from the standard Maps. For simplicity, we extract the data within the SceneDelegate class here; in a real-world scenario, the data would be passed to the screen that needs it and processed there. Additional action for Apple review When submitting your app for review by Apple as a routing app, you must include a GeoJSON file in the "Routing App Coverage File" section under the Distribution tab in App Store Connect. This file specifies the geographical areas your app covers. For more information about GeoJSON files, click the ? button next to "Routing App Coverage File" and check the linked page that appears. When you upload a GeoJSON file in the correct format, the file name is displayed as shown below. Once it appears in this state, you can proceed with submitting for review. Released safely...? In fact, after configuring the settings above, the My Route iOS app was submitted for review and approved. As a result, we released the app in this state. The published app linked up successfully, and with that, we could conclude by saying, "This is how to receive data as a routing app." However, the story doesn't end there. Soon after, an unfamiliar warning message started appearing… Just recently, when I uploaded the app to App Store Connect to comply with the Privacy Manifests, I received an email containing a warning message that was completely unrelated to the Privacy Manifests! Below is the actual warning email that was sent: As stated in each warning, it was necessary to add the following settings to Info.plist. 
Set the handler rank in the MKDirectionsRequest settings Enable "Supports opening documents in place" After adding these settings, the Info.plist content will look like the image below, and once they were applied, the warning message disappeared. Conclusion The My Route iOS app receives the starting point and destination from the standard Maps app and then searches for the corresponding route within the My Route app. Other navigation apps appear to work in a similar way, but even apps without built-in map features can be registered as routing apps. By linking with the standard Maps app, they can use departure and destination data, potentially broadening an app's capabilities and use cases. I hope this article proves helpful to anyone considering implementing or using routing apps.
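As a supplement to the review-submission step described above: the Routing App Coverage File is a GeoJSON document whose geometry must be a single MultiPolygon describing the regions the app supports. The sketch below is only an illustration of the shape of such a file; the coordinates (longitude first, then latitude) are placeholder values, not an actual coverage area:

```json
{
  "type": "MultiPolygon",
  "coordinates": [
    [
      [
        [139.5, 35.4],
        [140.0, 35.4],
        [140.0, 35.9],
        [139.5, 35.9],
        [139.5, 35.4]
      ]
    ]
  ]
}
```

Note that the first and last coordinate pairs of each ring must match to close the polygon.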
👋 About me Hello, I'm Sasaki, aiming to become a retrospective meister. I work as a project manager (PjM) at KINTO Technologies. At my previous job I was a PL, practicing agile development (Scrum and Kanban) with my team. I love retrospectives. At my previous job, retrospectives helped us put minimal design documents in order, push CI adoption forward, and even clear up friction between a newcomer and a senior engineer: useful for both development and team morale.

Introduction A year after joining, I have now run several releases and retrospectives as a project manager. Unlike a Scrum team's retrospective inside an iterative development cycle, a project retrospective covers a fixed-term development and release whose members differ every time. Compared with sprint retrospectives, I found the timing, perspectives, and goal-setting harder. Operationally, too, there was much to think about: how to draw out honest opinions when relationships are not as strong as within a standing team, which tools and frameworks fit the participants, and whether to meet in person. In this article I would like to put into words the approach I adopted for a cross-team "project retrospective" and the reasons behind it, along with the concrete procedure and the results 🙌

Table of contents Project retrospectives / The retrospective "kata" / Designing the retrospective / Running the retrospective

::: message This article is useful if you want to know about the challenges of project retrospectives and how to solve them, the basic form (kata) of a retrospective, or how to use and apply that kata. :::

Project retrospectives "Projects" at KINTO Technologies: KINTO Technologies has a wide range of products for end users and back-office staff: customer-facing front ends, products for the dealerships that order cars, products supporting the customer center, and more. When multiple products need to release in coordination, such as for changes to contract plans or the addition of supported brands, or when there are many stakeholders, KINTO Technologies runs the development as a project.

Project execution Each product team does its daily development in its own style, such as Scrum. At the project level we plan the major phases and milestones, settle requirements analysis and requirements definition, and then build out the design and development phases in an agile way. In other words, we run projects with the so-called hybrid development approach (waterfall + Scrum): products develop in iterative cycles day to day, while the waterfall frame guarantees quality and deadlines at each milestone. (Source: ハイブリッド開発とは何か? “アジャイル型”との違いや推進体制を解説)

This project Until now, customers on the KINTO ONE cancellation-fee-free plan had to apply by phone to cancel mid-contract. To improve convenience, we made this possible from the web. Separately, the back-office product team had spent a year on a project to semi-automate the manual mid-contract cancellation work. Once web cancellation launches, application volumes will grow, so the automation is a necessary companion. These two themes were developed and released as a single project. The duration was four months, with roughly 20 directly involved team leaders and planning members (KINTO members). https://corp.kinto-jp.com/news/service_20240219/

🧙‍♂️ The retrospective "kata" To make the retrospective more effective, I drew on the approach recommended in my favorite book, "Agile Retrospectives." A retrospective proceeds through the following five stages. 1. Set the stage: use icebreakers and a read-through of the ground rules to make it easier to voice opinions. 2. Gather data: read through the source material and put sticky notes on a whiteboard. 3. Generate insights: verbalize ideas through exercises such as brainstorming. 4. Decide what to do (decide action items): use dot voting or similar so participants decide which actions to pursue. 5. Closing: summarize the action items, express thanks, and end the retrospective.

In short: make it easy to talk, have people share what they felt on a whiteboard or similar, discuss it, and turn it into actions and improvements. The session follows this outline on the day. This "kata" is mainly used to build the agenda, but the book is also full of fine-grained tips, such as setting the time box according to the length of the development period, which I referred to throughout while designing the retrospective. If this has piqued your interest, do read Agile Retrospectives! アジャイルレトロスペクティブズ (Amazon)

🏗️ Designing the retrospective The content of a retrospective needs to adapt to the context: the nature of the project, the participants, and so on. Let me walk through my design process in order: 1. Confirm premises and constraints. 2. Set goals. 3. Design the procedure.

1. Premises and constraints First I organized the premises and constraints. Many different teams participate in this project, so I listed the points that could stall the session. Differences in retrospective culture between teams: some teams hold retrospectives regularly, others do not; some people have done KPT but know no other method. Differences in tool familiarity: some teams are not used to whiteboards; some people do not know Miro; some would need new permissions for tools like Confluence. Participants are spread across sites: Tokyo, Nagoya, Osaka. Also, this is fixed-term, cross-team development, and the project disbands afterward.

💭 Thoughts Retrospective culture: Looking at meeting schedules, the cultural differences were stark: some teams run sprint retrospectives, some do not. A previous project had done KPT, so everyone seems to know KPT at least. Tool choice: With multiple teams participating, tool familiarity varied. A meeting you attend without understanding the tools is not a pleasant one, so I wanted everyone to be able to participate positively without stumbling over tool operation. Meeting length: I cannot speak for the whole company, but compared with my previous job, meetings among my colleagues tend to run short: 30 minutes at most, and many people seem to find anything over an hour long. Depending on the team, some members might also not be enthusiastic about a project retrospective in the first place ("we already do one in our team..."). Considering all this, I wanted to keep it to the bare minimum and choose a length unlikely to cause friction. Location (onsite or remote): With people at different sites, in-person versus online also matters. If we meet in person, online members must not feel left out. When I asked in the internal agile channel, people shared approaches they had tried and their caveats (hybrid sessions, using Jamboard, and more. Thank you, everyone!)

Psychological safety There were psychological concerns as well. Members drawn from across teams have not built up communication with each other, which makes it hard to draw out honest opinions, so I wanted to secure as much psychological safety as possible. Now, how to proceed?

2. Setting goals Next I set the goals I wanted to achieve in this retrospective as PjM and facilitator.

💭 Thoughts I would love for retrospectives to drive improvement beyond organizational boundaries and cross-cutting improvements to the service itself, but I am still just beginning to build trust with the people involved, so I did not want to overreach. So this time I decided to focus the retrospective on sharing the project's results and on solving problems within my own span of control (project operations). For cross-organizational issues raised in discussion, I would settle for establishing a shared understanding, to be drawn on for some future improvement. Also, since I want the organization as a whole to get used to interactive retrospectives, I was determined to use a whiteboard. Main mission: confirm and share the project's results; confirm improvement tasks for project operations. Sub-mission: build a shared understanding between planning and development on broad-scope issues; introduce an interactive, whiteboard-based retrospective.

3. Designing the procedure With the constraints above in mind, I planned the session flow to achieve the goals.

1. Set the stage Psychological safety: Since many members participate, the retrospective follows a largely fixed flow. I explain that flow and declare the ground rules at the start to secure psychological safety, and, since I want the message to sink in gradually, I also place the rules unobtrusively somewhere visible on the whiteboard. Time-boxing: Going by Agile Retrospectives, a release retrospective may take one to four days, but as noted above, even an hour feels long to many people here, so I compressed it drastically to about 45 minutes. "2.3 Deciding the length: How much time should you spend on a retrospective? It depends. [...] For a team doing one-week iterations, an hour-long retrospective is probably enough. For a team doing 30-day iterations, half a day is enough. Shorten the time and you get sloppy results. (Release and project retrospectives take at least a day, sometimes as many as four.)" (Quoted from Agile Retrospectives, chapter 2.)

2. Gather data Retrospective culture / tool choice: KPT is a popular technique within KTC, and since we mostly use Confluence as our everyday work tool, the Confluence whiteboard clears the setup hurdles such as account preparation, so that is what we use. For data-gathering exercises we use two: Timeline and KPT. Timeline may be unfamiliar to some, so let me explain each in detail.

Timeline Without preparation, a retrospective tends to focus on whatever happened recently and left an impression. The timeline is an exercise for recalling what happened earlier: you arrange the facts of the period and the emotions felt at the time in chronological order, so the situation back then can be recalled. (Source: https://anablava.medium.com/a-timeline-retrospective-easy-guide-6385fce0affd) This time, rather than asking participants to fill in a timeline from scratch, the PjM writes one in advance and places it unobtrusively, and participants add sticky notes if something comes to mind. We also skip dot voting on it. (Placing a pre-written timeline is an idea I borrowed from Kin-chan in the agile channel.) Since this is a release retro, I would ideally gather the emotional data more thoroughly, but within the 45-minute limit I kept it light.

KPT Many people have done it, and even those who have not can pick it up easily, so we run KPT. Many will have encountered it somewhere: student orientation, corporate group training, and so on. You list Keep, Problem, and Try items and look for improvements in each topic. The name comes from the initials: KPT ("kept" or "kay-pee-tee"; I say "kept"). Keep: things that went well and should continue. Problem: issues, problems, things to improve. Try: things to attempt.

Caveats about KPT KPT is the most popular retrospective technique, but because the framing starts from "problems," it can create friction and make it hard to voice small nagging doubts, and personal opinions can end up treated as problems from the outset. When I facilitate KPT, I am even more careful than usual to stay objective and watch my wording. (Purely a personal preference, but I recommend using it right after showing off your own results, such as a demo, because product-feature problems then come more readily from the development members themselves!)

3. Generate insights We have people produce ideas as KPT Try items. Many participants are busy, so writing in advance is allowed. When entries are written in advance, the link between the Keep/Problem items and the Tries can weaken, so we also set aside time on the day to dig into the Try items.

4. Decide what to do (decide action items) In a Scrum team retrospective you would get commitment via dot voting and the like, but with only 45 minutes, I select the action items while facilitating. Organizational issues outside the project scope may also come up; I prepare mentally to acknowledge those broad issues while making the improvements to project operations concrete.

🎯 Running the retrospective Here is a summary of the retrospective actually held after the design work above. (A bit detailed, but I also note the things I was careful about, for anyone looking to introduce retrospectives.)

1. Invitations and follow-up (advance preparation) I asked teams to collect concrete results of the project, and announced, with due apologies, that we would use a whiteboard. (Where possible, mention this in person at someone's desk or at the end of another meeting.) For busy people and those who wanted to write in advance, I shared the whiteboard beforehand.

2. Setting the stage I read the ground rules aloud briefly to create an easy atmosphere. (Gently calling out things like "no barbed words, please!") Ground rules: This is a safe space for sharing feedback. Don't use barbed words. Share everything you feel comfortable sharing. Focus on improvement, not blame. Copying someone else's sticky note, or adding to it, is welcome.

3. Gathering data Each department presented the project's concrete results. Since cancellations that used to be handled by phone can now be submitted on the web, the effort saved appears to be substantial! a. Filling in the timeline: I asked participants to optionally add nagging concerns and the emotions they felt at the time to the timeline. The comments showed that things stayed tough even after the release. This kind of feedback rarely surfaces in KPT, so I was glad to capture it. b. Writing Keep, Problem, and Try: Some wrote in advance, some on the spot, and a wide range of opinions came in. Since many participants are busy, writing all the way through Try was allowed, with Tries written as a set with their Keeps and Problems. (The text is blurred because much of it concerns internal work.)

4. Generating insights I read all the sticky notes aloud briefly and asked participants for their overall impressions of the KPT entries so far. Discussion arises as impressions are shared, so while facilitating, I captured any ideas that could lead to new Tries on sticky notes and posted them.

5. Deciding action items We turned Tries that PjM operations can improve, or that the next project can take up, into concrete actions. (Excerpting only what can be shown publicly.)

6. Closing We summarized the above, expressed our thanks, and adjourned.

Results of this retrospective The retrospective yielded the following. Project outcomes: stakeholders got a concrete, numerical sense of the effort saved by the project; participants could celebrate the project release together; improvement actions arose for project issues; and we built a shared understanding of cross-organizational issues (such as how design documents are handled). Other outcomes: proposing the whiteboard took some courage, but it was accepted readily; and the timeline revealed post-release struggles, reconfirming the importance of stable post-release operations. With that, I achieved the goals I had planned as facilitator 🎉 Main mission: confirm and share the project's results: achieved. Confirm improvement tasks for project operations: achieved. Sub-mission: build a shared understanding between planning and development on broad-scope issues: achieved. Introduce an interactive, whiteboard-based retrospective: achieved.

Looking ahead Once we tried them, the whiteboard and the timeline were accepted readily. Now that the timeline has had its debut, I would like to try techniques such as 4Ls before long. As for the time constraint, perhaps I was just being overly cautious and could schedule something longer. Alternatively, instead of a release retrospective only, holding retrospectives at project milestones might give us well-timed sessions while keeping each at 45 minutes. That way, problems could be fixed as they arise!

Final thoughts In this article I have summarized my thinking and methods for project retrospectives. If you are wrestling with project retrospectives too, I hope this helps! The retrospective embodies iterative inspection and adaptation: a true child of agile. Happy retrospectives, everyone!
Overview We are Mori, Maya S, and Flo from the Global Development Group Business Enhancement Team. This is the second article in a series on KINTO Global Innovation Days, hosted by the Global Development Group. Click here for the previous article 👈 In this article, we will report on the three pre-events held over the four days from December 14th to December 19th, as well as the actual Innovation Days. The full schedule of events was as follows: Design Dash Workshop As an organization expands, each member's tasks become subdivided, and it becomes hard to see how members in different roles think about completing their tasks, even within the same team. To address this, we conducted a workshop focused on developing solutions to the challenges faced by a target user. In this workshop, participants conducted interviews on a particular topic, developed a product to solve the interviewee’s problem, and then proposed and demonstrated the solution. The workshop had four objectives:

- Ideation
- Rapid decision making
- Discovery practice
- Improved communication

We intentionally did not state the objectives at the outset because we wanted participants to discover the learning points for themselves. Even so, 72% of the participants expressed satisfaction in a survey conducted afterward. Feedback included comments such as, "I gained the ability to identify issues," "I learned how to find solutions within a limited time," "I learned how to design user-centric products," and "I realized the importance of teamwork." This feedback indicates that the objectives were achieved naturally. Communication Workshop As tasks become more subdivided, opportunities for presentations and output also decrease. Some participants rarely had such opportunities in their regular work. While the final pitch of this event provided an opportunity to present, we believed that understanding what others need is a critical skill for the preparation phase. 
Needs vary depending on the stakeholder’s position, and failing to grasp this can lead to off-target outputs. Therefore, as part of the preparation, we conducted a communication workshop focused on thoroughly understanding stakeholders' positions and situations and identifying their current needs. In this workshop, participants were divided into different roles: copywriters, designers, developers, directors, and sales. They shared their concerns within the project and converted them into actionable to-do lists based on their respective roles. This workshop received a 56% satisfaction rate. Feedback included comments such as, "I learned the importance of sharing essential information," "I learned how to consolidate opinions from people with different focus points and deliver them as a well-integrated deliverable," and "I was able to see the perspectives of different roles." Toyota Way Workshop One of the benefits of holding the Innovation Days at KINTO Technologies was the opportunity to learn from the unique practices of the Toyota Group. To this end, we organized a session on the Toyota Production System, often cited as an origin of Agile methodologies. While we will omit the specific methods here, the session provided a chance to learn about the Toyota Way and the Toyota Production System through Toyota’s history and the founder’s philosophy. Rather than just receiving information passively, participants were encouraged to share their insights with their teams and pledge to uphold the critical mindsets during the following Innovation Days. This session also achieved a high level of satisfaction at 72%, with some participants saying, "I was able to understand the Toyota Way not just as words, but also through its history." Innovation Days Building on what we learned in the workshops leading up to the event, the day of Innovation Days finally arrived. 
Schedule for the Day 🔻🔻 Opening Ceremony We started with a pre-recorded message from our CEO, followed by a detailed explanation of the rules and an icebreaker activity. Code of Conduct We established the rules by referencing various case studies and emphasized the importance of adhering to the Code of Conduct ✨ The rules covered:

- Presentation content
- Presentation time (5-minute pitch + 2-minute Q&A)
- Approved tools for development
- Submission method for deliverables
- Evaluation criteria and points
- Advice from the organizing team

![code-of-conduct](/assets/blog/authors/M.Mori/20230314/code-of-conduct.png =450x) Icebreaker For the icebreaker, we used Telestration, a drawing-based telephone game. It turned out to be quite challenging for the participants. Ideathon We began by nurturing the initial ideas participants had considered beforehand. Using the Value Proposition Canvas, they analyzed their target, identified the challenges, and explored the value their solutions could offer. Typically, this type of analysis is conducted by product managers (PdMs) in the Global Development G. However, this event provided a rare opportunity for the entire team, including engineers, to participate in the process. Hack Start!!! Now the actual development started. Each team was assigned to a different room in the office to begin their development work. The teams had varied members, so some focused not only on coding but also on preparing for the next day's pitch or working on design tasks. The actual development time was about 7.5 hours out of the two days. Within this limited time, each team considered where to prioritize their efforts, how much development to complete, and what key points to emphasize during the Final Pitch. During the two-day Innovation Days, we also had lunch sessions. On Day #1, teams went to lunch together to deepen their ideas. On Day #2, one member from each team joined another team’s lunch to provide feedback on ideas and development content. 
While these sessions were intended for interaction, the informal discussions also sparked ideas that could benefit not only Innovation Days but also our regular work. The Final Pitch!!! On the evening of Day #2, everyone gathered for the final pitch. Each team was allotted 5 minutes plus 2 minutes for Q&A. Although we will omit the details of each idea, the pitches ranged from proposals for improving existing products and suggesting new features to introducing new services. Each team delivered a unique pitch and demo. The presentation order was decided by drawing lots on the spot to ensure fairness. Not knowing when their turn would come added an element of excitement! Final Jury Review After all the pitches were finished, it was time for the group manager and four assistant managers to evaluate the teams. Although two of them were participating remotely, it left an impression on me to see them concentrating as they judged the teams. The evaluation criteria for judging were as follows 🔻🔻

| Evaluation criteria | Ratio |
| --- | --- |
| Originality | 10% |
| UI/UX | 10% |
| Tech Skill | 30% |
| Team Work | 20% |
| Practicality, Feasibility | 20% |
| Excitement factor | 10% |

As a result of the evaluation, the winning team was awarded an original smartphone stand and a cake. It was great that even the teams that didn’t win seemed to enjoy participating, creating a great atmosphere in the venue. We removed our masks only during the photo shoot. Summary As KINTO Technologies’ first event organized by the Global Development G, we not only achieved our primary goal of enhancing communication but also generated new ideas for KINTO and explored new technologies that we hadn’t been able to use in our daily work. The event was well received by all stakeholders, including participants, managers, and executives. Although we were on the organizing side, we also gained various skills as an organizing team, such as responding to emergencies, facilitation, time management, and coordination. 
Most importantly, the bonds within our team were strengthened ✨ So far, we have written about the planning stage and the event itself from the perspective of the organizing team, but we will also be publishing an interview with the winning team from the perspective of the participants. Stay tuned!
Hi! I’m Ryomm, developing the iOS app my route at KINTO Technologies. I think there are still many scenarios where UITextView is needed, particularly when you want to use TextKit. I tried integrating UITextView with SwiftUI using UIViewRepresentable, but I ran into difficulties adjusting the height. This article details how I resolved that issue. Approach Here’s how you can resolve the issue:

```swift
import SwiftUI
import UIKit

struct TextView: UIViewRepresentable {
    var text: NSAttributedString

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    func makeUIView(context: Context) -> UITextView {
        let view = UITextView()
        view.delegate = context.coordinator
        view.isScrollEnabled = false
        view.isEditable = false
        view.isUserInteractionEnabled = false
        view.isSelectable = false
        view.backgroundColor = .clear
        view.textContainer.lineFragmentPadding = 0
        view.textContainerInset = .zero
        return view
    }

    func updateUIView(_ uiView: UITextView, context: Context) {
        uiView.attributedText = text
    }

    func sizeThatFits(_ proposal: ProposedViewSize, uiView: UITextView, context: Context) -> CGSize? {
        guard let width = proposal.width else { return nil }
        let dimensions = text.boundingRect(
            with: CGSize(width: width, height: CGFloat.greatestFiniteMagnitude),
            options: [.usesLineFragmentOrigin, .usesFontLeading],
            context: nil)
        return .init(width: width, height: ceil(dimensions.height))
    }
}

extension TextView {
    final class Coordinator: NSObject, UITextViewDelegate {
        private var textView: TextView

        init(_ textView: TextView) {
            self.textView = textView
            super.init()
        }

        func textView(_ textView: UITextView, shouldChangeTextIn range: NSRange, replacementText text: String) -> Bool {
            return true
        }

        func textViewDidChange(_ textView: UITextView) {
            self.textView.text = textView.attributedText
        }
    }
}
```

(Background color is added for clarity.) Explanation In the makeUIView() function, setting view.isScrollEnabled to false caused an issue.
By using setContentHuggingPriority() and setContentCompressionResistancePriority(), line breaks were restored even when scrolling was disabled. However, the vertical display area was not adjusting correctly: when displaying text with more than two lines, any text that exceeded the vertical area was cut off.

```swift
func makeUIView(context: Context) -> UITextView {
    let view = UITextView()
    view.delegate = context.coordinator
    view.isScrollEnabled = false
    view.isEditable = false
    view.isUserInteractionEnabled = false
    view.isSelectable = true
    view.backgroundColor = .clear
    // For example, like this?
    view.setContentHuggingPriority(.defaultHigh, for: .vertical)
    view.setContentHuggingPriority(.defaultHigh, for: .horizontal)
    view.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
    view.setContentCompressionResistancePriority(.required, for: .vertical)
    view.textContainer.lineFragmentPadding = 0
    view.textContainerInset = .zero
    return view
}
```

(・〜・) So I decided to use sizeThatFits(). This is a method available from iOS 16 that can be overridden in UIViewRepresentable. By using this method, you can specify the size of the view based on the size proposed by the parent view. In this case, I wanted to use NSAttributedString for the text passed to the view, so I calculated the height of the provided text. For the method to calculate the height, I referred to this article.

```swift
func sizeThatFits(_ proposal: ProposedViewSize, uiView: UITextView, context: Context) -> CGSize? {
    guard let width = proposal.width else { return nil }
    let dimensions = text.boundingRect(
        with: CGSize(width: width, height: CGFloat.greatestFiniteMagnitude),
        options: [.usesLineFragmentOrigin, .usesFontLeading],
        context: nil)
    return .init(width: width, height: ceil(dimensions.height))
}
```

If this were all, the view’s area would become larger than the size calculated by sizeThatFits(), so I added the following two settings to makeUIView() to remove the padding:

```swift
textView.textContainer.lineFragmentPadding = 0
textView.textContainerInset = .zero
```

Completed ◎ Conclusion After quite a bit of trial and error, I discovered that using sizeThatFits() gives me the correct size. That insight inspired me to write this article🤓
Introduction Hello, and thank you for visiting. My name is ITOYU and I am in charge of front-end development in the New Car Subscription Development Group of the KINTO ONE Development Department. Nowadays, it is common to use frameworks such as Vue.js, React, and Angular when creating web applications. The New Car Subscription Development Group is also using React and Next.js for development. Libraries and frameworks are frequently updated, such as the release of React version 19 and Next.js version 15. Each time that happens, you need to get up to speed with the new features and changes, and update your knowledge. In addition, the evolution of front-end technology has been remarkable in recent years. Libraries and frameworks that were standard until a few months ago can become obsolete, and it is not uncommon for entirely new ones to appear. Under these circumstances, front-end developers need to be on the constant lookout for new technologies, libraries, and frameworks, gather information, and keep on learning. This is the rule of front-end development, and also its joy. With passion and insatiable curiosity, front-end developers seek to learn and master new technologies, libraries, and frameworks to improve their skills, develop better web applications efficiently, pursue best practices, and become front-end gurus. However, at the root of every front-end library and framework, there is JavaScript. Do we really understand JavaScript 100% and use it with total mastery? Is it really possible to master libraries and frameworks without fully understanding JavaScript's core features? Can we really call ourselves front-end gurus? Personally, I cannot confidently answer “yes” to those questions. So, in order to become a front-end expert, I decided to relearn JavaScript and fill in the gaps in my knowledge.
The purpose of this article As a first step, my goal is to learn about the basic JavaScript concept of Scope and understand it more deeply. You may think this is too basic! I'm sure most front-end engineers use scope without even thinking about it. However, when it comes to putting the concept of scope and the knowledge and names related to it into words, it is surprisingly difficult. This article aims to explain the types of scope to help you understand the concept of scope. Reading it will most likely not furnish you with any new implementation methods. However, understanding the concept of scope should help you understand how JavaScript behaves, and lay the foundations for writing better code. :::message The JavaScript code and concepts contained in this article are explained based on the assumption that they will be running in a browser. Please be aware that they might produce different behavior in a different environment (such as Node.js). ::: Scope In JavaScript, there is a concept called Scope. Scope is the range within which variables and functions can be referenced by running code. First, let us take a look at the following types of scope: Global scope Function scope Block scope Module scope Global scope Global scope refers to the scope that can be accessed from anywhere within the program. The ways to make a variable or function have global scope are roughly as follows: Variables added to the properties of the global object Variables that have script scope Variables added to the properties of the global object You can give variables and functions global scope by adding them to the properties of the global object. The global object differs depending on the environment; in a browser environment it is the window object, and in a Node.js environment it is the global object. In this example, I will assume you are coding for a browser environment, and show you how to add properties to the window object.
The way to do it is to declare variables and functions with var. Variables and functions declared with var get added as properties of the global object, and can be referenced from anywhere.

```javascript
// A variable added to the properties of the window object
var name = 'KINTO';
console.log(window.name); // KINTO
```

You can also omit the window object when referencing variables added to the global object.

```javascript
// Referencing a variable while omitting the window object
var name = 'KINTO';
console.log(name); // KINTO
```

Script-scoped variables Script scope is the scope in which variables and functions declared at the top level of a JavaScript file or at the top level of a script element can be accessed. Variables and functions declared at the top level with let or const will have script scope.

```html
<!-- Variables that have script scope -->
<script>
  let name = 'KINTO';
  const company = 'KINTO Technologies Corporation';
  console.log(name); // KINTO
  console.log(company); // KINTO Technologies Corporation
</script>
```

Top level “Top level” means outside any functions or blocks. This explanation of top-level declarations may seem confusing, so let's take a look at the following examples to see the difference between variables declared at the top level and those that are not:

```html
<!-- Variables that have been declared at the top level -->
<script>
  let name = 'KINTO';
  const company = 'KINTO Technologies Corporation';
  console.log(name); // KINTO
  console.log(company); // KINTO Technologies Corporation
</script>

<!-- Variables that have not been declared at the top level -->
<script>
  const getCompany = function() {
    const name = 'KINTO';
    console.log(name); // KINTO
    return name;
  }
  console.log(name); // ReferenceError: name is not defined

  if (true) {
    const company = 'KINTO Technologies Corporation';
    console.log(company); // KINTO Technologies Corporation
  }
  console.log(company); // ReferenceError: company is not defined
</script>
```

In the code above, the variable name declared in the function getCompany and the variable company declared in the if statement can only be referenced from within the function or the if statement's block, respectively. Differences between the global object and script scope Variables declared at the top level with let or const will have global scope and can be referenced anywhere, just like ones declared with var. However, unlike variables declared with var, ones declared with let or const do not get added to the properties of the global object.

```javascript
// Variables declared with let or const do not get added to the properties of the global object
let name = 'KINTO';
const company = 'KINTO Technologies Corporation';
console.log(window.name); // undefined
console.log(window.company); // undefined
```

:::message Handle the global object with care: Adding variables and functions to the properties of the global object using var should be avoided, because it can pollute the global object. Having the same variable and function names appear in different scripts can lead to unexpected behavior. So, if you want to make a variable have global scope, the recommended way is to declare it with let or const. ::: Function scope As mentioned in the previous example of variables without script scope, variables and functions declared within a function's curly brackets {} can only be referenced within that function. This is called **function scope**.
```javascript
const getCompany = function() {
  const name = 'KINTO';
  console.log(name); // KINTO
  return name;
}
console.log(name); // ReferenceError: name is not defined
```

Since the variable name is declared inside the function getCompany, it can only be referenced inside that function. So if you try to reference the name variable from outside the function, an error will occur. Block scope The example above showing variables that do not have script scope also featured variables declared within a range enclosed by curly brackets. These can only be referenced within that block. This is called block scope.

```javascript
if (true) {
  let name = 'KINTO';
  const company = 'KINTO Technologies Corporation';
  console.log(name); // KINTO
  console.log(company); // KINTO Technologies Corporation
}
console.log(name); // ReferenceError: name is not defined
console.log(company); // ReferenceError: company is not defined
```

Variables declared with let or const like this will have block scope. Variables declared inside curly brackets can only be referenced inside those curly brackets. :::message Function declarations and block scope Functions declared inside a block do not get block scope, so they can also be referenced from outside it. Note: the results can vary depending on the JavaScript version and runtime environment.

```javascript
if (true) {
  function greet() {
    console.log('Hello, KINTO');
  }
  greet(); // Hello, KINTO
}
greet(); // Hello, KINTO
```

So, if you want a function to have block scope, the recommended way is to declare a block-scoped variable and assign the function to it.

```javascript
if (true) {
  const greet = function() {
    console.log('Hello, KINTO');
  }
  greet(); // Hello, KINTO
}
greet(); // ReferenceError: greet is not defined
```

::: Module scope Module scope is the referenceable scope of variables and functions declared inside a module. This means that variables and functions inside a module can only be accessed within that module, and cannot be referenced directly from outside it.
In order to reference variables or functions declared inside a module from outside it, you need to expose them using export, and bring them into the other file using import. For example, I will declare some variables in the file module.js as follows:

```javascript
// module.js
export const name = 'KINTO';
export const company = 'KINTO Technologies Corporation';
const category = 'subscription service'; // This variable has not been exported, so it cannot be referenced from outside.
```

The exported variables can be referenced by importing them into other files.

```javascript
// Referencing variables that have module scope
import { name, company } from './module.js';
console.log(name); // Output: KINTO
console.log(company); // Output: KINTO Technologies Corporation
console.log(category); // ReferenceError: category is not defined — this line causes an error because `category` has not been exported.
```

Trying to reference a variable that has not been exported will generate an error. This is because module scope hides the variable from the outside.

```javascript
// Trying to import a variable that does not have module scope
import { category } from './module.js'; // SyntaxError: The requested module './module.js' does not provide an export named 'category'
console.log(category); // The import fails, so this line is never run.
```

This shows how understanding module scope is extremely important for managing dependencies between modules in JavaScript. Summary Scope is the range within which variables and functions can be referenced by running code. Global scope is the scope that can be referenced from anywhere. Script scope is the referenceable scope of variables and functions declared at the top level of either a JavaScript file or a script element. Function scope is the referenceable scope of variables and functions declared inside a function's curly brackets. Block scope is the referenceable scope of variables and functions declared within a range enclosed by curly brackets.
Module scope is the scope that is only referenceable inside a module. This time, we explored the different types of scope in JavaScript. In the next article, I'll introduce additional concepts related to scope.
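As a bonus, the function scope and block scope rules covered in this article can be checked in one self-contained snippet. It is written to run under Node.js (so the browser-only global object and script scope parts are left out), but the behavior of `var`, `let`, and `const` shown here is the same in the browser:

```javascript
// Function scope: `inner` is only visible inside getCompany().
function getCompany() {
  const inner = 'KINTO';
  return inner;
}
console.log(getCompany());       // KINTO
console.log(typeof inner);       // undefined -> not visible out here

// Block scope: `let`/`const` stay inside the braces, `var` does not.
if (true) {
  let blockScoped = 'KINTO';
  var notBlockScoped = 'KINTO Technologies Corporation';
}
console.log(typeof blockScoped); // undefined -> block scope hides it
console.log(notBlockScoped);     // KINTO Technologies Corporation -> `var` ignores blocks
```

The last line is exactly why declaring with `let` or `const` is recommended: `var` leaks out of blocks, while block-scoped declarations stay contained.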
Introduction Hello! I'm Cui from the Global Development Division at KINTO Technologies. I'm currently involved in the development of KINTO FACTORY , and this year, I collaborated with team members to investigate the cause of memory leaks in our web service, identify the issues, and implement fixes to resolve them. This blog post will outline the investigation approach, the tools utilized, the results obtained, and the measures implemented to address the memory leaks. Background The KINTO FACTORY site that we are currently developing and managing operates a web service hosted on AWS ECS. This service relies on a member platform (an authentication service) and a payment platform (a payment processing service), both of which are developed and managed by our company. In January of this year, the CPU utilization of the ECS task for this web service spiked abnormally, leading to a temporary outage and making the service inaccessible. During this time, an incident occurred where a 404 error and an error dialog were displayed during certain screen transitions or operations on the KINTO FACTORY site. A similar memory leak occurred last July, which was traced to frequent Full GCs (cleanup of the old generation), leading to a significant increase in CPU utilization. In such cases, a temporary solution is to restart the ECS task. However, it is crucial to identify and resolve the root cause of the memory leak to prevent recurrence. This article outlines the investigation and analysis of these events, offering solutions based on the findings from these cases. Summary of Investigation Findings and Results Details of Investigation First, a detailed analysis of the event that occurred in this case revealed that the abnormally high CPU utilization of the web service was a problem caused by frequent Full GCs (cleanup of the old generation). Typically, after a Full GC is performed, a significant amount of memory is freed, and it shouldn't need to occur again for some time.
Nonetheless, Full GCs were occurring frequently, which points to excessive consumption of in-use memory and suggests that a memory leak was occurring. To test this hypothesis, we replicated the memory leak by continuously calling the APIs over an extended period, focusing primarily on those that were frequently called during the timeframe when the memory leak occurred. The memory status and dumps were then analyzed to pinpoint the root cause of the issue. The tools used for the investigation were: API traffic simulation with JMeter Monitoring memory state using VisualVM and Grafana (local and verification environments) Filtering frequently called APIs with OpenSearch Additionally, here's a brief explanation of the "old generation" memory frequently mentioned: In Java memory management, the heap is divided into two parts, the young generation and the old generation. The young generation consists of newly created objects. Objects that persist for a certain duration in this space are gradually moved through the survivor spaces into the old generation. The old generation stores long-lived objects, and when it becomes full, a Full GC will occur. The survivor spaces are the part of the young generation that tracks how long objects have survived. Result A significant number of new connection instances were being created during external service requests, leading to memory leaks caused by excessive and unnecessary memory consumption. Details of Investigation 1. Identify frequently called APIs To get started, we created a dashboard in OpenSearch summarizing API calls to understand the most frequently called processes and their memory usage. 2. Continue invoking the identified APIs in the local environment for 30 minutes, and afterward, analyze the results.
Investigation Method To reproduce the memory leak in the local environment and capture a memory dump for root cause analysis, we used JMeter with the following settings to call the APIs continuously for 30 minutes. JMeter settings Number of threads: 100 Ramp-up period*: 300 seconds Test environment macOS Java version: OpenJDK 17.0.7 2023-04-18 LTS Java configuration: -Xms1024m -Xmx3072m *Ramp-up period is the amount of time in seconds during which the specified number of threads will be started and executed. Result and hypothesis No memory leak occurred. We assumed that the memory leak was not reproduced because the setup differed from the actual environment. Since the actual environment runs on Docker, we decided to put the application in a Docker container and validate it again. 3. Continue calling the APIs again in the Docker environment, then analyze the results Investigation Method To reproduce the memory leak in the local environment, we used JMeter with the following settings and kept calling the APIs for one hour. JMeter settings Number of threads: 100 Ramp-up period: 300 seconds Test environment Local Docker container on Mac Memory limits: 4 GB CPU limits: 4 cores Results No memory leak occurred, even after changing the local environment to Docker. Hypothesis The setup still differs from the actual environment: no external APIs are being called; API calls over an extended period may gradually accumulate memory; and objects that are too large may not fit into the survivor spaces, ending up directly in the old generation. Since the issue could not be reproduced in the local environment, we decided to re-validate it in a verification environment that closely mirrors the production environment. 4. Continue making requests to the relevant external APIs in the verification environment for an extended period, and then analyze the results for any anomalies or issues.
Investigation Method To reproduce the memory leak in the verification environment, we used JMeter with the following settings and kept calling the APIs. Called APIs: 7 in total Duration: 5 hours Number of users: 2 Loops: 200 (planned 1,000, but changed to 200 due to low actual orders) Total Factory API calls: 4,000 Affected external platforms: member platform (1,600), payment platform (200) Results No Full GC occurred, and the memory leak was not reproduced. Hypothesis No Full GC was triggered because the number of loops was low. While memory usage was increasing, it had not yet reached the upper threshold. We will increase the number of API calls and reduce the memory limit to trigger a Full GC for further analysis. 5. Reduce the memory limit and continue hitting the APIs over an extended period to observe memory behavior and potential GC activity. Investigation Method We lowered the memory limit in the verification environment and kept calling the member platform-related APIs in JMeter for 4 hours. Duration: 4 hours APIs: the same 7 APIs as last time Frequency: 12 loops per minute (5 seconds per loop) Member platform call frequency: 84 times per minute Number of member platform calls in 4 hours: 20164 Dump acquisition settings:

```shell
export APPLICATION_JAVA_DUMP_OPTIONS='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/app/ -XX:OnOutOfMemoryError="stop-java %p;" -XX:OnError="stop-java %p;" -XX:ErrorFile=/var/log/app/hs_err_%p.log -Xlog:gc*=info:file=/var/log/app/gc_%t.log:time,uptime,level,tags:filecount=5,filesize=10m'
```

ECS memory limit settings:

```shell
export APPLICATION_JAVA_TOOL_OPTIONS='-Xms512m -Xmx512m -XX:MaxMetaspaceSize=256m -XX:MetaspaceSize=256m -Xss1024k -XX:MaxDirectMemorySize=32m -XX:-UseCodeCacheFlushing -XX:InitialCodeCacheSize=128m -XX:ReservedCodeCacheSize=128m --illegal-access=deny'
```

Results The memory leak was successfully reproduced and a dump was obtained. If you open the dump file in IntelliJ IDEA, you can see detailed memory information.
A detailed analysis of the dump file revealed that a significant number of new objects were being created with each external API request. Additionally, some utility classes were not being managed as singletons, contributing to the issue. 6. Heap dump analysis results We found that 5,410 HashMap$Node instances were created in reactor.netty.http.HttpResources, occupying 352,963,672 bytes (83.09%). Identification of memory leak location There is a leak in channelPools (a ConcurrentHashMap) in reactor.netty.resources.PooledConnectionProvider, so we focused on its storing and retrieving logic. Where poolFactory (InstrumentedPool) is retrieved: a holder (PoolKey) is created from the channelHash obtained from remote (Supplier<? extends SocketAddress>) and config (HttpClientConfig), and the poolFactory (InstrumentedPool) is retrieved from channelPools using the holder (PoolKey); an existing entry is returned if a matching key exists, or a new one is created if not. The cause of the leak is that the same settings were not being treated as the same key. reactor.netty.resources.PooledConnectionProvider:

```java
public abstract class PooledConnectionProvider<T extends Connection> implements ConnectionProvider {
    ...
    @Override
    public final Mono<? extends Connection> acquire(
            TransportConfig config,
            ConnectionObserver connectionObserver,
            @Nullable Supplier<? extends SocketAddress> remote,
            @Nullable AddressResolverGroup<?> resolverGroup) {
        ...
        return Mono.create(sink -> {
            SocketAddress remoteAddress = Objects.requireNonNull(remote.get(), "Remote Address supplier returned null");
            PoolKey holder = new PoolKey(remoteAddress, config.channelHash());
            PoolFactory<T> poolFactory = poolFactory(remoteAddress);
            InstrumentedPool<T> pool = MapUtils.computeIfAbsent(channelPools, holder, poolKey -> {
                if (log.isDebugEnabled()) {
                    log.debug("Creating a new [{}] client pool [{}] for [{}]", name, poolFactory, remoteAddress);
                }
                InstrumentedPool<T> newPool = createPool(config, poolFactory, remoteAddress, resolverGroup);
                ...
                return newPool;
            });
```

As the name suggests, channelPools holds channel information and is reused when similar requests come in. The PoolKey is created from the hostname and the hashCode of the connection settings, and that hashCode is what matters here. Where channelHash comes from: the class hierarchy of reactor.netty.http.client.HttpClientConfig is Object + TransportConfig + ClientTransportConfig + HttpClientConfig. The Lambda expression defined in com.kinto_jp.factory.common.adapter.HttpSupport is passed to PooledConnectionProvider as config#doOnChannelInit:

```kotlin
abstract class HttpSupport {
    ...
    private fun httpClient(connTimeout: Int, readTimeout: Int) = HttpClient.create()
        .proxyWithSystemProperties()
        .doOnChannelInit { _, channel, _ ->
            channel.config().connectTimeoutMillis = connTimeout
        }
        .responseTimeout(Duration.ofMillis(readTimeout.toLong()))
    ...
}
```

7. Behavior when retrieving channelPools (illustration) Key match case (normal): the key is present in channelPools, and the InstrumentedPool is reused. Key mismatch case (normal): the key does not exist in channelPools, and an InstrumentedPool is newly created. This case (abnormal): the key is present in channelPools, but the InstrumentedPool is not reused and is newly created. Correction and verification of problems Correction Rewrite the Lambda expression in question to an option call. Before correction:

```kotlin
abstract class HttpSupport {
    ...
    private fun httpClient(connTimeout: Int, readTimeout: Int) = HttpClient.create()
        .proxyWithSystemProperties()
        .doOnChannelInit { _, channel, _ ->
            channel.config().connectTimeoutMillis = connTimeout
        }
        .responseTimeout(Duration.ofMillis(readTimeout.toLong()))
    ...
}
```

After correction:

```kotlin
abstract class HttpSupport {
    ...
    private fun httpClient(connTimeout: Int, readTimeout: Int) = HttpClient.create()
        .proxyWithSystemProperties()
        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, connTimeout)
        .responseTimeout(Duration.ofMillis(readTimeout.toLong()))
    ...
}
```

Verification Prerequisites Call MembersHttpSupport#members(memberId: String) 1,000 times and check the number of objects stored in PooledConnectionProvider#channelPools. Results before correction When executed before the correction, 1,000 objects were stored in PooledConnectionProvider#channelPools (the cause of the leak). Results after correction When executed after the correction, one object was stored in PooledConnectionProvider#channelPools (leak resolved). Summary Through this investigation, we successfully identified the cause of the memory leak in KINTO FACTORY's web service and resolved the issue by implementing the necessary corrections. Specifically, the memory leak was caused by the creation of a large number of new objects during external API calls. This issue was resolved by replacing the Lambda expression with an option call, which reduced object creation. Through this project, the following important lessons were learned: Continuous monitoring : We recognized the importance of continuous monitoring through abnormal ECS service CPU utilization and frequent Full GCs. By continuously monitoring system performance, potential issues can be detected early and addressed promptly, preventing them from escalating. Early problem identification and countermeasures : By suspecting memory leaks in the web service and repeatedly calling the APIs over an extended period to reproduce the issue, we identified that a large number of new objects were being created during external service requests. This allowed us to quickly identify the cause of the issue and implement appropriate corrections.
Importance of teamwork : When tackling complex issues, the key to success lies in effective collaboration and teamwork, with all members working together towards a common goal. This correction and verification were accomplished through the cooperation and effort of the entire development team. In particular, the collaboration at each stage—investigation, analysis, correction, and verification—was crucial to the success of the process. During the investigative phase, there were many challenges. For example, reproducing memory leaks was difficult in the local environment, and it was necessary to re-validate in a verification environment similar to the actual environment. It also took a lot of time and effort to reproduce memory leaks and identify the cause by making prolonged calls to the external API. However, by overcoming these challenges, we were ultimately able to resolve the problem, leading to a strong sense of accomplishment and success. Through this article, we have shared practical approaches and valuable lessons learned to enhance system performance and maintain long-term stability. We hope this information will be helpful to developers facing similar challenges, providing them with insights to address such issues effectively. That's all for now!
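As a postscript, the pooling bug described above is easy to reproduce in isolation. The sketch below is neither KINTO's code nor Reactor Netty's; the class and names (LeakDemo, PoolKey, request) are made up for illustration. It models the same mistake: a map key whose equality depends on a lambda that is re-created on every request, so computeIfAbsent never finds an existing entry and the "pool" grows without bound.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class LeakDemo {
    // Simplified pool key: host name plus a configuration object.
    // In the real incident, the configuration's hash changed because
    // a new lambda was created on every request.
    record PoolKey(String host, Object config) {}

    static final Map<PoolKey, String> pool = new HashMap<>();

    static void request(String host, Object config) {
        pool.computeIfAbsent(new PoolKey(host, config), k -> "connection-pool");
    }

    public static void main(String[] args) {
        // Buggy pattern: a fresh lambda per call has a distinct identity,
        // so every request creates a new map entry (the leak).
        for (int i = 0; i < 1000; i++) {
            Consumer<Integer> perCallLambda = timeout -> {};
            request("members.example.com", perCallLambda);
        }
        System.out.println("buggy entries: " + pool.size()); // 1000

        // Fixed pattern: a stable value (e.g. the timeout itself) in the key,
        // so all equivalent requests share one entry.
        pool.clear();
        for (int i = 0; i < 1000; i++) {
            request("members.example.com", 5000 /* connTimeout */);
        }
        System.out.println("fixed entries: " + pool.size()); // 1
    }
}
```

This mirrors the fix in the article: `.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, connTimeout)` puts a plain value into the connection configuration instead of a per-call lambda, so equivalent configurations hash to the same pool key.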
Introduction I'm Wu from the Global Development Group. I usually work as a project manager for web and portal projects. I recently started going to the boxing gym again. I want to work hard on muscle training and dieting! We introduced a heatmap tool called Clarity from Microsoft to the website we are developing, so I'd like to talk about it. Background The Global KINTO Web , which introduces the global expansion of the mobility service KINTO, faces challenges such as short page visit durations and high bounce rates among users. While Google Analytics allows us to check metrics like scroll depth and click-through rates, it doesn’t provide insights into user behavior or what captures users' interest. Therefore, we decided to implement an analysis tool that allows us to monitor user behavior and easily identify issues. Reasons for Choosing Microsoft Clarity As mentioned earlier, the Global KINTO Web is currently a relatively small website and is not a service site. Considering cost-effectiveness, we needed a heatmap tool that was as affordable and easy to implement as possible. We evaluated popular tools such as User Heat, Mieruka Heatmap, Mouseflow, and User Insight. However, there were several reasons why we ultimately chose Clarity. First, it is provided by Microsoft, whose products are already in use at KINTO Technologies. Second, it is entirely free. Additionally, Clarity allows us to grant permissions to team members, enabling collaborative management. The simple setup process and the minimal engineering workload required for implementation were also critical factors in our decision. Below is a comparison table of the popular tools.
| Tool | Features | Implementation method | Price |
| --- | --- | --- | --- |
| Microsoft Clarity | Instant heatmaps showing where users clicked and how far they scrolled; session recording available (very useful); Google Analytics integration | Embed the HTML tag provided by Clarity into the website | Free |
| User Heat | Mouse-flow heatmap; scroll heatmap; click heatmap; attention heatmap | Embed the HTML tag provided by User Heat into the website | Free |
| Mieruka Heatmap | Three heatmap functions; ad analysis; event segmentation; A/B testing; IP exclusion; Customer Experience Improvement Chart | — | Free plan: 3,000 PV/month; paid plans add options such as A/B testing |
| Mouseflow | Broadly the features above, plus robust funnel setup and conversion-user analysis; recording; form analysis (input time, submission count, drop-off rate, etc.) | Embed the Mouseflow tracking code into the website | From the Starter plan (11,000 yen/month) up to the Enterprise plan |

What is Microsoft Clarity? Released on October 29, 2020, Microsoft Clarity is a free heatmap tool provided by Microsoft. According to the official website, it is a user behavior analytics tool that helps you interpret how users interact with your website through features such as session replays and heatmaps. Microsoft Clarity Setup 1. Create a new project in Clarity. 2. Paste Clarity's tracking code into the head element of your pages. 3. Integrate with Google Analytics. Dashboard The Dashboard provides a clear overview of your site's status with unique metrics such as Dead Clicks, Quick Backs, Rage Clicks, and Excessive Scrolling. Dead Clicks Dead Clicks are instances where a user clicks or taps an element on the page but no response is detected. Dead Clicks You can see exactly where users clicked, and because user movements are recorded as videos, the behavior is easy to understand.
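For a static site, step 2 of the setup above (pasting the tracking code into each page's head) can be scripted. This is a minimal sketch under assumptions: `ABC123` is a hypothetical project ID, the single-script tag is a stand-in for the real snippet you copy from the Clarity dashboard, and GNU sed is assumed for the `\n` in the replacement.

```shell
# Sketch: insert a placeholder Clarity tracking tag into a page's <head>.
# "ABC123" is a hypothetical project ID; copy the real snippet from the Clarity dashboard.
TAG='<script type="text/javascript" src="https://www.clarity.ms/tag/ABC123"></script>'

# Create a sample page to patch.
cat > page.html <<'EOF'
<html>
<head>
<title>Global KINTO Web</title>
</head>
<body></body>
</html>
EOF

# Insert the tag just before the closing </head> tag (GNU sed).
sed -i.bak "s|</head>|${TAG}\n</head>|" page.html

grep -c "clarity.ms" page.html  # → 1
```

In practice you would loop over every HTML file of the site (or, better, add the tag once to a shared layout template) rather than patch a single page.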
In the case of the Global KINTO Web , panels introducing each service are frequently clicked, which suggests that users are seeking more detailed information. Quick Back Quick Back refers to when a user quickly returns to the previous page after viewing a page. This can happen when users quickly determine that the page is not what they were looking for, or when they click a link by accident. It helps identify parts of your website where navigation might be less intuitive or where accidental clicks are more likely. Quick Back Rage Clicks Rage Clicks refer to when a user repeatedly clicks or taps the same area multiple times. Rage Clicks On the Global KINTO Web, several users were repeatedly clicking on a collection of links due to slow internet speeds. Upon investigation, the issue turned out to occur specifically for users on the same operating system, which led to further device testing. Excessive Scrolling Excessive Scrolling refers to when users scroll through a page more than expected. This metric helps identify the percentage of users who are not thoroughly reading the content on a page. Excessive scrolling Heatmap Click Heatmap You can see how many times users clicked on which parts of the page. The left menu shows a ranking of the most-clicked elements. Click maps Scroll Heatmap The Scroll Heatmap shows how far users scroll down the page. Red areas indicate the most-viewed sections, with the colors shifting from orange to green to blue as engagement decreases. Scroll maps Click Area Heatmap The Click Area Heatmap works like the Click Heatmap but shows which larger areas of the page are being clicked. This helps determine whether the content placed on the page is actually being viewed. Area maps Recording User behavior is recorded in real time. You can review the mouse cursor's position, page scrolling, page transitions, and click actions in the video.
Additionally, information about the user's device, location, number of clicks, pages viewed, and the final page visited can be accessed from the left-hand menu. The ability to view the entire sequence of user actions as a realistic video might be Clarity's most compelling feature. Recordings overview Conclusion The Global KINTO Web is still in development and has room for improvement. After deciding to implement a heatmap tool, we were able to release it in just about two weeks (0.5 person-months), thanks to the quality of Clarity and the ease of its implementation. While we are not yet using all of its features to the fullest, we plan to leverage this tool going forward to provide an even better user experience.
A.K Self-introduction I'm A from the my route Development Group. I'm from Latvia. At my previous job, a startup, I worked full-stack across a wide range of areas. What's your team structure? Six people, including me. First impression when you joined KTC? Any gaps? We're part of a large corporate group, but my team has a relaxed atmosphere and is surprisingly easy to work in. I also think it's genuinely great that there's a study session for basically any technology you're interested in. What's the atmosphere like on the ground? There are surprisingly many members from outside Japan, but everyone's technical level is high and they're easy to talk to. How did you feel about writing for the blog? It's not my strong suit. Question from M.O: A, your home is fully smart-home equipped, so what do you ask Alexa most often? Let's see: before leaving the house I always ask "What's the weather today?" Since Spotify is linked, I also use "What's this song?" and "Play such-and-such" almost every day. A bit of trivia: Alexa can be used as a TTS speaker, so I like having it play custom messages. The most useful one is simple: between 7 and 8 a.m., it announces the time plus a message every five minutes, like "It's already 7:35! Are you even planning to leave?!" S.D Self-introduction I'm Deguchi from the Produce Group. At my previous job I worked on car navigation and map data, and was also involved in natural language processing and machine learning. What's your team structure? A group of five in total, including me. Each member works on different projects. First impression when you joined KTC? Any gaps? The company atmosphere had been explained to me during the interviews, so there were no big gaps. What's the atmosphere like on the ground? Most communication happens frequently over Slack, but face-to-face conversations are also common, so I feel it's an easy environment to communicate in. How did you feel about writing for the blog? I think it's great that the company creates opportunities to publish externally. It gave me a reason to read through past articles and the tech blog, and I learned a lot about other members of the company. Question from A.K: Of all the gadgets you've collected, which do you think is the most useful? I can't narrow it down to one, so let me name a few! Raspberry Pi: a wonderful product that lets you casually try IoT and experience actually building something. It's amazing that Ubuntu (with a GUI) runs properly at this price. insta360 Flow: I'm grateful you can enjoy a gimbal of this performance at this cost! Subject tracking is handy, too! Mitene GPS: something you'd want a small child to carry. A good point is that it can be brought into places where smartphones aren't allowed, and the battery life is excellent. K.N Self-introduction I'm Nishi from the Data Engineering Team in the Analysis Group. What's your team structure? The Analysis Group consists of three teams: Data Science, Data Engineering, and Data Produce. First impression when you joined KTC? Any gaps? There are so many internal study sessions! What's the atmosphere like on the ground? We share progress, issues, and questions at the daily morning meeting. Since we work across three locations (Tokyo, Nagoya, and Osaka) with a mix of remote and office work, we talk over Slack huddles with screen sharing as needed. How did you feel about writing for the blog? I had used past articles as a reference myself, so I thought it would be a good way for others to get to know new employees. Question from S.D: Looking back, is there any information or system you wish had existed when you joined? I attended many orientations on the business model after joining, but I think some kind of review session around three months later would help the knowledge stick. W ![W avatar](/assets/blog/authors/numami/maymember/4.png =200x) Self-introduction I'm Watanabe from the Organizational HR Team in the Human Resources Group. I was previously a sales manager at a staffing company and did HR at a startup. What's your team structure? The Human Resources Group has an Organizational HR Team, a Recruiting Team, and a Labor and General Affairs Team, with 13 members in total. First impression when you joined KTC? Any gaps? No particular gaps. Since we're part of the TOYOTA group, I expected internal controls to be solid, and they were exactly as buttoned-up as I imagined. That said, I think there's still plenty of freedom. I had also heard about various internal challenges, good and bad, from the interview stage, so no gaps there either. What's the atmosphere like on the ground?
Everyone seems to approach their work positively. During my first month, I had one-on-one meetings with the department heads and managers, and everyone welcomed them warmly, so I was able to start with peace of mind. How did you feel about writing for the blog? I think it's wonderful how much effort goes into communicating both internally and externally. Under a large corporate umbrella, I expected external communication to be tightly controlled, but in that respect the company feels startup-like and quite free. Question from K.N: What do you want to take on at KTC? I want to take on the challenge of creating an environment where everyone can move forward as one. K Self-introduction I'm in the IT/IS Department. At my previous job, an SIer, I worked on Microsoft infrastructure, .NET development, and internal IT operations. What's your team structure? The IT/IS Department consists of four teams: Asset-Platform, Corporate-Engineering, Tech-Service, and Enterprise Technology. I'm on the Corporate-Engineering team, mainly tackling business issues and requests through system introduction, renewal, and improvement. First impression when you joined KTC? Any gaps? There were no gaps. My first impression was that everyone in the IT/IS Department thinks about how their own tasks deliver value, and that level of alignment is wonderful. What's the atmosphere like on the ground? I usually work at the Muromachi office. People casually consult one another and treat each other's problems as their own. We regularly have one-on-one meetings with leaders, managers, and the department head to talk frankly about opinions, impressions, requests, and worries, which makes it easy to reach out outside those meetings as well. How did you feel about writing for the blog? The company actively communicates externally beyond this blog too, so I simply thought it was a good initiative. Question from W: Any interesting place you've visited recently (a trip, etc.)? I recently moved, and went to a sento (public bathhouse) with a university friend who came to visit. Its Showa-era look and atmosphere had real character and gave me a break from the everyday. I had never been particularly interested in sento before, but it left an impression as a fun place to recharge. JK ![JK avatar](/assets/blog/authors/numami/maymember/6.png =200x) Self-introduction I'm Kim from the Toyota Woven City Payment Solution Development Group. What's your team structure? A group of six, including me. We actually work on the Woven side, so the team includes members from Woven as well as KTC. We handle a wide range of work, from the frontend to the backend and the surrounding infrastructure. First impression when you joined KTC? Any gaps? I was glad to hear about so many topics during orientation. What's the atmosphere like on the ground? Basically, we do sprint planning once a week and work toward the goals we set. We regularly hold Tech Talk and Document Reading sessions. Since we mostly work remotely, we communicate with team members using Slack, Meet, and so on. How did you feel about writing for the blog? I wanted to share at least some useful information with the people reading the article. Question from K: What do you value most in your work? It's probably the same anywhere, but I think communication with people matters most. On the development side in particular, a miscommunication about functional requirements can mean something completely different gets built (lol). One more thing is consistency: by keeping at it, you raise the quality of yourself and your own work, and eventually the whole team's. M ![M avatar](/assets/blog/authors/numami/maymember/7.png =200x) Self-introduction I'm M from the Data Integration Platform Team. What's your team structure? Two of us, including me, maintain the product. Since the product we're responsible for integrates with many systems, I end up interacting with a lot of people. First impression when you joined KTC? Any gaps? I was surprised at how actively study sessions and new tools and services are adopted. What's the atmosphere like on the ground? It's calm; we talk when there's a need to. How did you feel about writing for the blog?
Nothing in particular, honestly. Question from JK: How were your onboarding and catch-up handled after joining? Anything you found especially good? First, in a one-on-one, I got an explanation of the team structure and the purpose and positioning of the product I would be working on. After that, I set up my machine and development environment from documentation. Up to that point it was ordinary onboarding, I'd say. Then, to understand the business, I did a hands-on walkthrough of the entire flow up to signing a contract for a new KINTO car. The hands-on materials were carefully prepared, and it was great to come away understanding the flow of leasing a car! D ![D avatar](/assets/blog/authors/numami/maymember/8.png =200x) Self-introduction I'm D from the my route Development Group. What's your team structure? Six people, including me. First impression when you joined KTC? Any gaps? I felt it's a company with many opportunities to share information. The atmosphere is more relaxed than I imagined. What's the atmosphere like on the ground? I mostly work quietly on my own. My teammates are all kind. How did you feel about writing for the blog? I was very nervous at the thought of people outside the company reading it. Question from M: Have you found a favorite lunch spot? If so, tell us! The Indian restaurant on the first floor of our building. M.O Self-introduction I'm Onuma from the Mobile Development Group. At my previous job I was an Android engineer at the voice platform Voicy. I also developed the backend (Go), the frontend (Angular/TypeScript), and iOS apps. What's your team structure? The my route Android development team has four members, including me. First impression when you joined KTC? Any gaps? I had heard in advance that there were many engineers from outside Japan, but there were even more than I expected. There are plenty of internal study sessions and lots of opportunities to learn and share. What's the atmosphere like on the ground? Everyone's area of responsibility is clearly defined, so I can concentrate on my work and coding. It's also an environment where knowledge gained on the job is shared right away. How did you feel about writing for the blog? I enjoy writing about technology. Question from D: If anything, what has troubled you most since joining? Complying with the work-from-home rules by adding my WFH schedule in Outlook.
UI Guidelines
Good morning, good afternoon, or good evening! This post is brought to you by Az from the Global Development UIUX Team, who loves taking apart machines and taking pictures while sipping delicious tea. UI Guidelines What are UI Guidelines in the first place? Who uses them? Who benefits from them? Let's explore these questions step by step. Differences between Brand Guidelines and UI Guidelines Let me explain the general differences between Brand Guidelines and UI Guidelines. KINTO's guidelines are not publicly available, but we do maintain Brand Guidelines internally. What are Brand Guidelines? They outline key branding rules to follow, including: The brand's philosophy and values Brand name usage and writing conventions Approved colors and imagery Required user experience elements *this image is for illustrative purposes Based on these rules, designers think about methods of expression and designs. Reference: What are Brand Guidelines? What are UI Guidelines? UI Guidelines provide clear, practical examples of valid design elements: Colors and shapes used for buttons, text, and so on The screen layout ratio How to use icons and images Here, you'll typically find components and specifications ready to be applied directly, both in design and in implementation. What happens when you implement from a requirement? For example, imagine that the conditions below are what make a button "KINTO-like": It uses fixed colors It's rectangular It has an easy-to-read label It can be identified as a button Do you have an image in mind? Now, let's say we get the results below: All conditions are met, yet some details differ from the button we may have been expecting. First row: there are no rounded corners on any of the four corners Second row, left: the aspect ratio of the margin inside the button is wrong Third row, left: it has shadows the others don't use If everything was created from the same requirements, why don't all the details match up?
The root of the issue lies in the lack of a common understanding When the same team members work together consistently, there's a strong likelihood they can collaborate effectively. In reality, however, both teams and their members often change. When team members receive feedback like "This is different from what we expected from you..." after completing a task, the need to redo the work can lead to significant losses in both time and motivation. In the UI Guidelines, the main button is defined with precise specifications, such as rounded corners set to 30 px, top and bottom margins of 10 px, and a width of 1/12 of the screen. This ensures consistent output with no deviation. Concept and usage of UI Guidelines Designing according to this format makes the work easier for both the designer and the front-end team. Let me explain with some common, real-world examples. You can follow the guidelines without a designer There's no need to stress over minor details: the styles, including text size, are standardized as fixed presets. Improper sizing and margins often lead to unstable quality, but if you follow the guidelines when setting margins, the screen layout will remain well-organized and visually appealing, even without designer adjustments. Minimizes screen size issues A common issue with design files is handling screen size in pixels. With the guidelines, ratios and breakpoints are predefined, ensuring there are no discrepancies. Many elements can be created using "standard specifications" Input form layout Input forms are a typical example of content with similar fields where additions and reordering are frequent. We've seen several changes in recent projects, but since the designs followed the guidelines, we were able to modify and implement them in parallel without issues. Message delivery Result screens and error screens often contain a large number of text elements and combinations.
Since the layout for icons and text is fixed for each status, there was no need to prepare multiple patterns; only exceptions required special treatment. Fewer problems for everyone! Consistent output is achievable regardless of differences in experience and skill. The guidelines serve two major purposes: reducing the need for verbal and written communication, and maintaining a shared understanding across teams. I plan to keep improving the system so that we can keep resolving challenges and say, "If you run into issues, just check the guidelines and your problem will be solved!"
Hello, this is HOKA from the Human Resources Group. (I have also written a past article called Let's Create a Human Resources Group - Behind the Scenes of an Organization that Grew to 300 Employees in Three Years , so please take a look at that one as well.) On March 28, 2024, just three days before the end of fiscal year 2023, 40 members of KINTO Technologies' Development Support Division from Osaka, Nagoya, and Tokyo gathered at the Google office in Shibuya, Tokyo to participate in the 10X Innovation Culture Program. Here is a report on the event. What is the 10X Innovation Culture Program? The 10X Innovation Culture Program is a leadership program designed to create an organizational environment that fosters innovation. It was launched by Google Japan in September 2023. The program consists of three key elements: online training, assessment tools, and solution packages. Through the online training, participants learn about the "Think 10X" concept. The assessment tools help participants understand their current position and identify issues, and the solution packages provide strategies to solve those issues. The program allows participants to naturally integrate innovative ideas and knowledge into their own organizations. How it started Awatchi from our DBRE team is one of the management members of the Corporate Culture and Innovation Subcommittee at Jagu'e'r and had shown a strong interest in the 10X Innovation Culture Program. When the program launched, Awatchi organized a project to gather volunteers from within the company to experience a light version of the program at the Google office. I participated, and that's how it all began. It was so enjoyable, and I learned so much, that when I shared it at the morning meeting the next day, everyone was enthusiastic about the idea.
The manager suggested, "Let's do this with the entire team!" and the division head immediately approved, saying, "If it's just for the Development Support Division, I can authorize it myself!" With this excitement, before we knew it, the event was quickly organized. Even before getting approval from the president or vice president, we had already decided to go ahead with the plan. I appreciate our company's culture and sense of speed. From then on, we began a process of trial and error to work out how to conduct this at scale, with over 40 people across the entire Development Support Division (lol). The road to implementation At first, we considered conducting it with just our own team members, but that would make it difficult to expand to divisions other than the Development Support Division. To achieve a "10X innovation culture," we must fully understand it ourselves and be able to speak about it with confidence. Therefore, this time we decided to hold a 10X Culture Program at a Google office, run by Google employees. The goal was to learn how to run the program effectively and to train "facilitators" who could conduct it within our company in the future. When we surveyed the Development Support Division for members interested in becoming facilitators, 17 people responded that they wanted to take on the role. Members from outside HR were included, regardless of occupation or gender. (They participated in the 10X Culture Program with the intention of becoming facilitators.) Once the overall lineup was decided, two people from Google, Awatchi, and HOKA took the lead in planning the content. Drawing on my experience taking the course in October 2023, we decided to watch the videos and complete the assessments in advance so we could concentrate on the discussions.
The flow was as follows: an online preparatory meeting (watch the six videos and take an assessment), followed by the 10X Innovation Culture Program at the Google office (review Development Support Division trends from the assessment results, then hold two discussions). We held a preparatory meeting (March 20th) Awatchi also started preparations for the first preparatory meeting. However, the assessment tool provided didn't work as expected! Awatchi solved it with brute force. There were no assessments available in English! So we requested an English translation from an in-house specialist. There were no English videos either! So we used YouTube's translation tool. Various issues arose, and we received help from people both inside and outside the company. 24% of KINTO Technologies' employees are foreign nationals, and this was a moment that made me realize once again the importance of being able to speak English. Awatchi acted as the facilitator for the preparatory meeting. We watched the 10X Innovation Culture Program videos one by one and then answered the assessments, repeating the process. At the end of the preparatory meeting, the results were shared on the spot via Looker Studio. Participants could see trends across the Development Support Division overall and within each group, which made them all the more enthusiastic. ![AssessmentResults](/assets/blog/authors/hoka/20240611/assessment_result.png =750x) On the day (March 28th) Finally, March 28th arrived. A total of 40 people from Tokyo, Osaka, and Nagoya gathered at the Google office. Since the event was held at the Google office, which is usually hard to get into, I felt like a total tourist (lol). ![GatheringAtTheGoogleOffice](/assets/blog/authors/hoka/20240611/arriving.jpg =750x) The facilitators on the day were Rika and Kota from Google. At the opening, they explained what the "10X" in the 10X Innovation Culture Program stands for, using examples from Google.
![ScenesFromTheEvent](/assets/blog/authors/hoka/20240611/state.jpg =750x) Everyone listened attentively and took notes. And finally, the discussion began. To make the most of the limited time, participants were divided into groups of around five to discuss "intrinsic motivation" and "risk-taking," the areas the preparatory assessment results had flagged as having room for improvement. In the "intrinsic motivation" section, we discussed points such as what is needed to approach daily work with passion and how this can be achieved within the company. In the "risk-taking" section, participants exchanged opinions on topics such as how to lower the psychological hurdles to taking on new challenges and how to create a culture that tolerates failure. The facilitator's role here was to lead each group. In these culture discussions, it was important to ensure everyone had a chance to speak and to keep the discussion broad without dwelling too long on individual points. The workshop agreement presented by Google included the following points:

Basic premises
- See it as an opportunity to learn
- Accept that mistakes are normal

Notes
- Be aware of the impact your words have on those around you
- Interpret all opinions as being given in good faith
- Don't share what others have said outside the group

Let's enjoy Google culture!
- Let's call each other by nicknames

These points were very important for making the group work smooth and lively.
Here is the actual discussion scene: ![Discussion1](/assets/blog/authors/hoka/20240611/discussion1.jpg =750x) ![Discussion2](/assets/blog/authors/hoka/20240611/discussion2.jpg =750x) ![Discussion3](/assets/blog/authors/hoka/20240611/discussion3.jpg =750x) ![Discussion4](/assets/blog/authors/hoka/20240611/discussion4.jpg =750x) ![Discussion5](/assets/blog/authors/hoka/20240611/discussion5.png =750x) ![Discussion6](/assets/blog/authors/hoka/20240611/discussion6.jpg =750x) The session was lively from start to finish, and at the same time we were able to learn about our own challenges and understand what we could do to improve them. Here are the actual survey results. ![SurveyResult1](/assets/blog/authors/hoka/20240611/survey1.png =750x) ![SurveyResults2](/assets/blog/authors/hoka/20240611/survey2.png =750x) ![SurveyResults3](/assets/blog/authors/hoka/20240611/survey3.png =750x) We also received some nice comments from the two Google staff members! Rika Thank you for your hard work! This excitement was only possible thanks to the advance preparations made by everyone here, so let me once again express my gratitude to you all! We were also energized by the enthusiastic participants at the workshop.💪 I believe that the way you are moving forward with cultural transformation at your company will influence other companies as well! Kota Thank you very much for this valuable opportunity! I was overwhelmed by everyone's enthusiasm. I hope this will be the catalyst for KINTO's cultural development to move to the next stage. I'm rooting for you! We hope to develop further initiatives from here, so please continue to support us.😃 Epilogue As it was the end of March, the event coincided with the goal-setting period at KINTO Technologies. After participating in this program, we heard many people say things like, "That's not 10X, is it?"
This made me realize that each and every one of us has begun to think more seriously than ever about what we want our organization to be. We also decided to introduce Google's famous "20% rule" in the Development Support Division. Previously, there was hesitation about adopting this kind of program due to the impression that "only Google can do this." Experiencing the program firsthand, however, changed our mindset to "maybe we can do it too." Moreover, another event hosted by the Development Support Division has been scheduled for three months later, at the end of June (which is coming up soon). This time we will run it ourselves, and we are also preparing to introduce the program in other divisions. The role of the facilitators will be crucial. How was it? If you feel that your company culture has challenges, or if you simply want to improve your company, I highly recommend checking out the 10X Innovation Culture Program . You will surely gain valuable insights. Announcement Speaker confirmed for Google Cloud Next Tokyo '24 👏👏 On August 2, 2024, our Development Support Division General Manager, Kishi, and Awatchi, who is promoting 10X within the company, will speak at Google Cloud Next Tokyo '24 to introduce the 10X Innovation Culture Program's experience workshop. They will share our honest thoughts about the experience, so please drop by if you have time.
Hello. I'm @hoshino from the DBRE team. In the DBRE (Database Reliability Engineering) team, our cross-functional efforts are dedicated to addressing challenges such as resolving database-related issues and developing platforms that effectively balance governance with agility within our organization. DBRE is a relatively new concept, so very few companies have dedicated organizations to address it. Even among those that do, there is often a focus on different aspects and varied approaches. This makes DBRE an exceptionally captivating field, constantly evolving and developing. For more information on the background of the DBRE team and its role at KINTO Technologies, please see our Tech Blog article, The need for DBRE in KTC . In this article, I will introduce the improvements the DBRE team experienced after integrating PR-Agent into our repositories. I will also explain how adjusting the prompts allows PR-Agent to review non-code documents, such as tech blogs. I hope this information is helpful. What is PR-Agent? PR-Agent is an open source software (OSS) developed by Codium AI, designed to streamline the software development process and improve code quality through automation. Its main goal is to automate the initial review of Pull Requests (PR) and reduce the amount of time developers spend on code reviews. This automation also provides quick feedback, which can accelerate the development process. Another feature that stands out from other tools is the wide range of language models available. PR-Agent has multiple functions (commands), and developers can select which functions to apply to each PR. 
The main functions are as follows:

- Review: evaluates the quality of the code and identifies issues
- Describe: summarizes the changes made in the Pull Request and automatically generates an overview
- Improve: suggests improvements for the added or modified code in the Pull Request
- Ask: allows developers to interact with the AI in comment form on the Pull Request, addressing questions or concerns about the PR

For more details, please refer to the official documentation . Why we integrated PR-Agent The DBRE team had been working on a Proof of Concept (PoC) for an AI-powered schema review system. During the process, we evaluated various tools offering review functionality against the following criteria:

Input criteria:
- Ability to review database schemas based on KTC's Database Schema Design Guidelines
- Ability to customize the inputs to the LLM to enhance response accuracy (e.g., integrating chains or custom functions)

Output criteria (whether the following could be achieved based on the LLM's outputs, with review results posted to GitHub):
- Ability to trigger reviews via PRs
- Ability to comment on PRs
- Ability to use AI-generated output to comment on the code (schema information) in PRs
- Ability to suggest corrections at the code level

Despite a thorough investigation, we couldn't find a tool that fully met our input requirements. However, during the evaluation we decided to experiment with one of the AI review tools already used internally in the DBRE team, which led to the adoption of PR-Agent. The main reasons for choosing PR-Agent among the tools we surveyed are as follows:

- Open source software (OSS): possible to implement while keeping costs down
- Supports various language models: you can select the language model that best suits your needs
- Ease of implementation and customization: PR-Agent was relatively easy to implement and offered flexible settings and customization options, allowing us to optimize it for our specific requirements and workflows.

For this project, we used Amazon Bedrock, for the following reasons:

- Since KTC mainly uses AWS, Bedrock allows for quick and seamless integration, so we decided to try it first.
- Compared to OpenAI's GPT-4, using Claude 3 Sonnet through Bedrock reduced costs to about one-tenth.

For these reasons, we integrated PR-Agent into the DBRE team's repositories. Customizations implemented during PR-Agent integration We mainly followed the steps outlined in the official documentation for the integration; in this article, we'll detail the specific customizations we made. Using Amazon Bedrock Claude 3 We used the Amazon Bedrock Claude 3 Sonnet language model. Although the official documentation recommends access key authentication, we opted for ARN-based authentication to comply with our internal security policies.

```yaml
- name: Input AWS Credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: ${{ secrets.AWS_ROLE_ARN_PR_REVIEW }}
    aws-region: ${{ secrets.AWS_REGION_PR_REVIEW }}
```

Managing prompts in the GitHub Wiki Since the DBRE team runs multiple repositories, we needed to centralize prompt references. After integrating PR-Agent, we also wanted team members to be able to edit and fine-tune prompts easily. That's when we considered using the GitHub Wiki: it tracks changes and is easy for anyone to edit, so team members can change prompts with little friction. In PR-Agent, you can set extra instructions for each function, such as describe, through the extra_instructions field in GitHub Actions.
( Official documentation )

```toml
# Here are excerpts from the configuration.toml
[pr_reviewer] # /review
# extra_instructions = ""  # Add extra instructions here

[pr_description] # /describe
# extra_instructions = ""

[pr_code_suggestions] # /improve
# extra_instructions = ""
```

Therefore, we customized the setup to dynamically add the extra instructions (prompts) listed in the GitHub Wiki through variables in the GitHub Actions workflow where PR-Agent is configured. Here are the configuration steps. First, generate a token with any GitHub account and clone the Wiki repository in GitHub Actions:

```yaml
- name: Checkout the Wiki repository
  uses: actions/checkout@v4
  with:
    ref: main # Specify any branch (GitHub's default is master)
    repository: {repo}/{path}.wiki
    path: wiki
    token: ${{ secrets.GITHUB_TOKEN_Foobar }}
```

Next, set the information from the Wiki as environment variables: read the contents of each file and export the prompts.

```yaml
- name: Set up Wiki Info
  id: wiki_info
  run: |
    set_env_var_from_file() {
      local var_name=$1
      local file_path=$2
      local prompt=$(cat "$file_path")
      echo "${var_name}<<EOF" >> $GITHUB_ENV
      echo "$prompt" >> $GITHUB_ENV  # write the file contents, not a literal string
      echo "EOF" >> $GITHUB_ENV
    }
    set_env_var_from_file "REVIEW_PROMPT" "./wiki/pr-agent-review-prompt.md"
    set_env_var_from_file "DESCRIBE_PROMPT" "./wiki/pr-agent-describe-prompt.md"
    set_env_var_from_file "IMPROVE_PROMPT" "./wiki/pr-agent-improve-prompt.md"
```

Finally, configure the PR-Agent action step to read each prompt from the environment variables.
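To sanity-check the `NAME<<EOF` multiline format that this step writes into `$GITHUB_ENV`, here is a small local simulation, a sketch of our own rather than part of the real workflow: a temp file stands in for the `$GITHUB_ENV` file that GitHub Actions provides, and the sample prompt file is a stand-in for a page exported from the Wiki.

```shell
# Local simulation of the GITHUB_ENV multiline-value syntax ("NAME<<EOF ... EOF").
# A temp file stands in for the real $GITHUB_ENV provided by GitHub Actions.
GITHUB_ENV=$(mktemp)

set_env_var_from_file() {
  local var_name=$1
  local file_path=$2
  local prompt
  prompt=$(cat "$file_path")
  echo "${var_name}<<EOF" >> "$GITHUB_ENV"
  echo "$prompt" >> "$GITHUB_ENV"  # file contents, line breaks preserved
  echo "EOF" >> "$GITHUB_ENV"
}

# A stand-in prompt file, as if exported from the Wiki.
printf 'Review the schema.\nFlag missing indexes.\n' > review-prompt.md
set_env_var_from_file "REVIEW_PROMPT" "review-prompt.md"

cat "$GITHUB_ENV"
# REVIEW_PROMPT<<EOF
# Review the schema.
# Flag missing indexes.
# EOF
```

After the step runs, later steps in the same job can reference the multiline value as `${{ env.REVIEW_PROMPT }}`, which is exactly what the final PR-Agent step relies on.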
```yaml
- name: PR Agent action step
  id: Pragent
  uses: Codium-ai/pr-agent@main
  env:
    # model settings
    CONFIG.MODEL: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
    CONFIG.MODEL_TURBO: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
    CONFIG.FALLBACK_MODEL: bedrock/anthropic.claude-v2:1
    LITELLM.DROP_PARAMS: true
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    AWS.BEDROCK_REGION: us-west-2
    # PR_AGENT settings (/review)
    PR_REVIEWER.extra_instructions: |
      ${{ env.REVIEW_PROMPT }}
    # PR_DESCRIPTION settings (/describe)
    PR_DESCRIPTION.extra_instructions: |
      ${{ env.DESCRIBE_PROMPT }}
    # PR_CODE_SUGGESTIONS settings (/improve)
    PR_CODE_SUGGESTIONS.extra_instructions: |
      ${{ env.IMPROVE_PROMPT }}
```

By following the steps outlined above, you can pass the prompts listed on the Wiki to PR-Agent and execute them. What we did to expand review targets to include tech blogs Our company's tech blogs are managed in a Git repository, which led to the idea of using PR-Agent to review blog articles like code. PR-Agent is typically a tool specialized for code review. The Describe and Review functions worked reasonably well when we tested them on blog articles, but the Improve function only returned "No code suggestions found for PR," even after adjusting the prompts (extra_instructions). (This behavior likely occurs because PR-Agent is designed primarily for code review.) To address this, we tested whether customizing the system prompt for the Improve function would enable it to review blog articles. After customization we received responses from the AI, so we decided to proceed with customizing the system prompts. A system prompt is a prompt passed to the LLM separately from the user prompt; it includes specific instructions on the items to output and their format. The extra_instructions explained earlier are part of the system prompt: when the user provides additional instructions in PR-Agent, those instructions are incorporated into the system prompt.
Here are excerpts from the system prompt for Improve:

```toml
[pr_code_suggestions_prompt]
system="""You are PR-Reviewer, a language model that specializes in suggesting ways to improve for a Pull Request (PR) code.
Your task is to provide meaningful and actionable code suggestions, to improve the new code presented in a PR diff.

(omission)

{%- if extra_instructions %}

Extra instructions from the user, that should be taken into account with high priority:
======
{{ extra_instructions }}  # The content specified in extra_instructions is added here.
======
{%- endif %}

(omission)
```

PR-Agent allows you to edit system prompts from GitHub Actions, just like extra_instructions. By customizing the existing system prompts, we expanded the review capabilities to cover not only code but also text. Below are some examples of our customizations.

First, we modified the code-specific instructions so they could be used to review tech blog articles.

System prompt before customization:

```
You are PR-Reviewer, a language model that specializes in suggesting ways to improve for a Pull Request (PR) code.
Your task is to provide meaningful and actionable code suggestions, to improve the new code presented in a PR diff.
```

System prompt after customization:

```
You are a reviewer for an IT company's tech blog.
Your role is to review the contents of .md files in terms of the following.
Please review each item listed as a checkpoint and identify any issues.
```

Next, we modified the section with specific instructions so that it can review tech blog articles.
Changing the instructions regarding the output format would affect the program, so we customized it so that tech blogs can be reviewed simply by replacing the code-review wording with text-review wording.

System prompt before customization:

```
Specific instructions for generating code suggestions:
- Provide up to {{ num_code_suggestions }} code suggestions. The suggestions should be diverse and insightful.
- The suggestions should focus on ways to improve the new code in the PR, meaning focusing on lines from '__new hunk__' sections, starting with '+'. Use the '__old hunk__' sections to understand the context of the code changes.
- Prioritize suggestions that address possible issues, major problems, and bugs in the PR code.
- Don't suggest to add docstring, type hints, or comments, or to remove unused imports.
- Suggestions should not repeat code already present in the '__new hunk__' sections.
- Provide the exact line numbers range (inclusive) for each suggestion. Use the line numbers from the '__new hunk__' sections.
- When quoting variables or names from the code, use backticks (`) instead of single quote (').
- Take into account that you are reviewing a PR code diff, and that the entire codebase is not available for you as context. Hence, avoid suggestions that might conflict with unseen parts of the codebase.
```

System prompt after customization:

```
Specific instructions for generating text suggestions:
- Provide up to {{ num_code_suggestions }} text suggestions. The suggestions should be diverse and insightful.
- The suggestions should focus on ways to improve the new text in the PR, meaning focusing on lines from '__new hunk__' sections, starting with '+'. Use the '__old hunk__' sections to understand the context of the code changes.
- Prioritize suggestions that address possible issues, major problems, and bugs in the PR text.
- Don't suggest to add docstring, type hints, or comments, or to remove unused imports.
- Suggestions should not repeat text already present in the '__new hunk__' sections.
- Provide the exact line numbers range (inclusive) for each suggestion. Use the line numbers from the '__new hunk__' sections.
- When quoting variables or names from the text, use backticks (`) instead of single quote (').
```

After that, add a new Wiki page for the system prompt, following the steps in "Managing prompts in a Wiki" explained earlier.

```diff
  - name: Set up Wiki Info
    id: wiki_info
    run: |
      set_env_var_from_file() {
        local var_name=$1
        local file_path=$2
        local prompt=$(cat "$file_path")
        echo "${var_name}<<EOF" >> $GITHUB_ENV
        echo "$prompt" >> $GITHUB_ENV
        echo "EOF" >> $GITHUB_ENV
      }
      set_env_var_from_file "REVIEW_PROMPT" "./wiki/pr-agent-review-prompt.md"
      set_env_var_from_file "DESCRIBE_PROMPT" "./wiki/pr-agent-describe-prompt.md"
      set_env_var_from_file "IMPROVE_PROMPT" "./wiki/pr-agent-improve-prompt.md"
+     set_env_var_from_file "IMPROVE_SYSTEM_PROMPT" "./wiki/pr-agent-improve-system-prompt.md"

  - name: PR Agent action step
    (omission)
+     PR_CODE_SUGGESTIONS_PROMPT.system: |
+       ${{ env.IMPROVE_SYSTEM_PROMPT }}
```

By following the steps outlined above, we customized PR-Agent's Improve function, which normally specializes in code review, to support reviewing blog articles. Note, however, that the responses may not always be 100% as expected, even after modifying the system prompt; the same is true when using the Improve function on program code.

Results of installing PR-Agent

Implementing PR-Agent has brought the following benefits:

Improved review accuracy
- It highlights issues we often overlook, improving the accuracy of our code reviews.
- It allows us to review past closed PRs, providing opportunities to reflect on older code. Reviewing past PRs helps us continually enhance the quality and integrity of our codebase.

Reduced burden of creating pull requests (PRs)
- The pull request summary feature makes creating pull requests easier.
- Reviewers can quickly see the summary, improving review efficiency and shortening merge times.

Improved engineering skills
- Keeping up with rapid technological advances while managing daily duties can be challenging. The AI's suggestions have been very effective for learning best practices.

Tech Blog Reviews

Implementing PR-Agent for our tech blog has reduced the burden of reviews. Although it's not perfect, it checks articles for spelling mistakes, grammar issues, and consistency of content and logic, helping us find errors that are easy to overlook. Below is an example of a review of an actual tech blog article (Event Report DBRE Summit 2023).

![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_describe_blog.png =800x)
Summary of the Pull Request (PR) for the tech blog by PR-Agent (Describe)

![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_review_blog_01.png =800x)
![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_review_blog_02.png =800x)
Review of the Pull Request (PR) for the tech blog by PR-Agent (Review)

![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_improve_blog.png =800x)
Proposed changes to the tech blog by PR-Agent (Improve)

It is also important to note that a human must make the final decision, for the following reasons:
- PR-Agent's review results for the exact same Pull Request (PR) can vary from run to run, and the accuracy of the responses can be inconsistent.
- PR-Agent reviews may generate irrelevant or completely off-target feedback.

Conclusion

In this article, we introduced how implementing and customizing PR-Agent has improved our work efficiency. While complete review automation is not yet possible, with the right configuration and customization PR-Agent plays a supportive role in enhancing the productivity of our development teams. We aim to continue using PR-Agent to further improve efficiency and productivity.
Introduction

Hello! I'm Hasegawa, an Android engineer at KINTO Technologies! I usually work on developing an app called my route. Please check out the other articles written by members of the my route Android Team!
- Potential Bug Triggers in Android Development Due to Regional Preferences
- SwiftUI in Compose Multiplatform of KMP

In this article, I will introduce how to get OG information in Kotlin (Android) and how to deal with character encodings in the process.

To be explained in this article:
- What is OGP?
- How to get OGP in Kotlin
- Why text in the information obtained via OGP gets corrupted
- How to deal with corrupted text

What is OGP?

OGP stands for "Open Graph Protocol," a mechanism that uses HTML meta tags so that the title and image of a web page are shown correctly when sharing it with other services. Web pages configured with OGP carry this information in meta tags; the following is an excerpt. Services that want OG information can read it from these tags.

```html
<meta property="og:title" content="page title" />
<meta property="og:description" content="page description" />
<meta property="og:image" content="thumbnail image URL" />
```

How to get OGP in Kotlin

This time, I will use OkHttp for communication and Jsoup for HTML parsing. First, use OkHttp to access the web page at the URL you want OG information from. I will omit error handling since it varies depending on the requirements.

```kotlin
val client = OkHttpClient.Builder().build()
val request = Request.Builder().apply {
    url("URL for wanted OG information")
}.build()
client.newCall(request).enqueue(
    object : okhttp3.Callback {
        override fun onFailure(call: okhttp3.Call, e: java.io.IOException) {}

        override fun onResponse(call: okhttp3.Call, response: okhttp3.Response) {
            parseOgTag(response.body)
        }
    },
)
```

Then parse the contents using Jsoup.
```kotlin
private fun parseOgTag(body: ResponseBody?): Map<String, String> {
    val html = body?.string() ?: ""
    val doc = Jsoup.parse(html)
    val ogTags = mutableMapOf<String, String>()
    val metaTags = doc.select("meta[property^=og:]")
    for (tag in metaTags) {
        val property = tag.attr("property")
        val content = tag.attr("content")
        val matchResult = Regex("og:(.*)").find(property)
        val ogType = matchResult?.groupValues?.getOrNull(1)
        if (ogType != null && !content.isNullOrBlank()) {
            ogTags[ogType] = content
        }
    }
    return ogTags
}
```

Now `ogTags` holds the necessary OG information.

Why text in the information obtained via OGP gets corrupted

The code so far can correctly get the OG information of most web pages. For some pages, however, the text may come out corrupted. Here, I will explain the cause. Above, I called `string()` as shown below.

```kotlin
val html = response.body?.string() ?: ""
```

This function selects the character encoding in the following order of precedence:
1. BOM (Byte Order Mark) information
2. The charset in the response header
3. UTF-8, unless specified by 1 or 2

More information can be found in the comments in the OkHttp repository. In other words, what happens if there is no BOM, no response-header charset, and the web page is encoded in a non-UTF-8 encoding such as Shift_JIS? ...Text corruption occurs, because the body is decoded with the default UTF-8. So what do we do? In the next section, I will explain how to deal with it.

How to deal with corrupted text

We found the cause of the corrupted text in the previous section. In fact, the character encoding may be specified in the HTML itself, as follows. If there is no BOM and no response-header charset, this information can be used.

```html
<meta charset="UTF-8"> <!-- HTML5 -->
<meta http-equiv="content-type" content="text/html; charset=Shift_JIS"> <!-- before HTML5 -->
```

However, there is a contradiction: to read that meta tag, the HTML must already be parsed with the correct character encoding. Or so you might think.
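The failure mode, and why a first UTF-8 pass is still usable, can be demonstrated quickly. The sketch below is in Python rather than Kotlin for brevity; the page content is a made-up example.

```python
import re

# Simulate a page encoded in Shift_JIS whose charset is only declared in the HTML.
page = '<meta charset="Shift_JIS"><title>日本語のタイトル</title>'
raw = page.encode("shift_jis")

# Decoding the bytes as UTF-8 (the default fallback) corrupts the Japanese text...
first_pass = raw.decode("utf-8", errors="replace")
assert "日本語" not in first_pass

# ...but the ASCII portion, including the charset declaration, survives intact,
# so the real encoding can be recovered and the bytes decoded a second time.
m = re.search(r'charset="?([\w-]+)', first_pass)
second_pass = raw.decode(m.group(1))
assert "日本語のタイトル" in second_pass
```

Because Shift_JIS and UTF-8 agree on ASCII bytes, the mangled first pass is still good enough to locate the `charset` declaration, which is exactly the trick the Kotlin code below relies on.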
For example, UTF-8 and Shift_JIS are compatible in the ASCII range, so it is not a problem to decode with UTF-8 once. (This method may parse twice. If you checked the byte array for the meta tag beforehand, you might be able to determine the character encoding before parsing, but this time I prioritized code readability.) So, you can write code like the following.

```kotlin
/**
 * Get the Jsoup Document from the response body.
 * If the response body charset is not UTF-8, parse again with the correct charset.
 */
private fun getDocument(body: ResponseBody?): Document {
    val bytes = body?.bytes() ?: byteArrayOf()

    // If a charset is specified in the response header, decode with that charset
    val headerCharset = body?.contentType()?.charset()
    val html = String(bytes, headerCharset ?: Charsets.UTF_8)
    val doc = Jsoup.parse(html)

    // If headerCharset is specified, the document was parsed correctly, so return as is
    if (headerCharset != null) {
        return doc
    }

    // Get the charset from the meta tag in the HTML.
    // If it is not present, the encoding is unknown and the UTF-8-parsed doc is returned.
    val charsetName = extractCharsetFromMetaTag(html) ?: return doc
    val metaCharset = try {
        Charset.forName(charsetName)
    } catch (e: IllegalCharsetNameException) {
        Timber.w(e)
        return doc
    }

    // If the charset specified in the meta tag differs from UTF-8, parse again with it.
    // Parsing is a relatively heavy process, so don't do it twice unnecessarily.
    return if (metaCharset != Charsets.UTF_8) {
        Jsoup.parse(String(bytes, metaCharset))
    } else {
        doc
    }
}

/**
 * Get the charset string from the HTML meta tag.
 *
 * Before HTML5 -> meta[http-equiv=content-type]
 * HTML5 or later -> meta[charset]
 *
 * @return the charset string, e.g. "UTF-8", "Shift_JIS"; null if no charset is found
 */
private fun extractCharsetFromMetaTag(html: String): String? {
    val doc = Jsoup.parse(html)
    val metaTags = doc.select("meta[http-equiv=content-type], meta[charset]")
    for (metaTag in metaTags) {
        if (metaTag.hasAttr("charset")) {
            return metaTag.attr("charset")
        }
        val content = metaTag.attr("content")
        if (content.contains("charset=")) {
            return content.substringAfter("charset=").split(";")[0].trim()
        }
    }
    return null
}
```

Then, let's change the function that creates the Jsoup Document to use the process we just created.

```diff
- val html = body?.string() ?: ""
- val doc = Jsoup.parse(html)
+ val doc = getDocument(body)
```

Conclusion

Thank you for reading this far. Most web pages use UTF-8, and even when a different encoding is used, the charset is usually specified in the BOM or the response header. Therefore, I do not think this kind of problem occurs very often. However, if you do run into such a site, it may be difficult to figure out the cause and the fix. I hope this article helps you.
Hello. My name is Zume, and I am the group manager of the Quality Assurance (QA) Group at KINTO Technologies. Although I have a long history in QA, I haven't been particularly focused on sharing my experience or knowledge until now. I thought it would be a good idea to take some time to gather my thoughts, but before I knew it, 2022 came to an end with the ringing of the bells on New Year's Eve. It's tough to find time for myself when I'm usually busy with work. This has always been my excuse for not making time for personal projects. If I keep saying "I'll do it next month" a few more times, I'll soon find myself welcoming another new year.

About test management

This time, I would like to introduce the benefits of the test management tools used by my group and the journey we took to implement them. To all the QA engineers reading this article: how are you managing your test cases? Some of you may already be using a paid test management tool. Generally, Excel or Spreadsheets tend to be used for managing test cases and test executions. However, when using Excel or Spreadsheets for test management, we encountered several challenges (⇒ indicates the resulting concerns or potential issues):

- Test case structuring often becomes personalized by the test designers, and case classifications and formats vary.
  ⇒ When the designer changes, the handover process becomes complicated.
  ⇒ Due to the lack of a standardized format, it takes time to understand cases when the project changes.
- To review the cases, you need to open and check the contents of files each time.
  ⇒ It is difficult to share documents and know-how within the team.
- Stakeholders (other than QA) have a hard time getting an overview of the test content and results.
  ⇒ The QA side needs to prepare reports for stakeholders.
- For regression testing, a new file needs to be created for each test cycle.
  ⇒ It becomes difficult to track which cases were reused.
- It is difficult to follow the change history or update history of test cases.
  ⇒ Maintenance, including case updates, takes a lot of time (plus, Excel is not suited to simultaneous online editing by multiple users).
- Since the test execution results are entered manually, the exact execution time is unknown.
  ⇒ It is challenging to pinpoint the exact time when defects occurred.
- Test cases and bug reports are not linked.
  ⇒ It becomes difficult to compile statistics such as the defect rate for each function (manual compilation is possible, but very tedious).

And so on. To address these challenges, we considered implementing tools that support the full series of test activities: test requirements management, test case creation, test execution, and test reporting. In fact, we never considered using Excel or Spreadsheets from the beginning, because we knew from experience that once Excel-based operations become ingrained, it takes a lot of time to shift away from them.

Evaluation of tools to be implemented

Initially, the tools we considered were:

- TestLink: an open-source, web-based test management tool. Free of charge.
- TestRail: a web-based test management tool. Paid.
- Zephyr for JIRA: a JIRA plugin. Paid. (Renamed Zephyr Squad in 2021[^1])

[^1]: Zephyr for Jira is now Zephyr Squad, SmartBear Software, 2021

One of the reasons we considered TestLink was my experience with it at my previous workplace. Another advantage is that it can be tried out right away via Docker, even in a local environment. In fact, I once used a Mac for both testing and running TestLink. However, I joined KINTO Technologies in March 2020 (when it was still KINTO Co., Ltd.), and the project for which we planned to introduce the tool was scheduled to be released two months later, in May. To make things more challenging, the first state of emergency due to the spread of COVID-19 was declared in April during this period.
In such a nerve-wracking situation, which tool did we choose as the most appropriate option? It was Zephyr for JIRA. The biggest advantage was that it could be implemented quickly as an add-on for JIRA, which was already in use within the company. Additionally, considering the unexpected shift to remote work during the COVID-19 pandemic, it was convenient that it could be accessed from outside the company. Although it was a paid tool, we decided to start using it with the idea that if we could get through the May release, we would reassess its continued use.

Looking back at my notes from that time:

- Since it's a JIRA plugin, I thought I could change the language settings, but it seems only parts of it support Japanese.
- Zephyr's reports are based on scenarios, and there is no reporting function for individual test steps. etc. [^2]

[^2]: It seems that requests for step-by-step reports were made by users as early as 2013, according to the Atlassian community. However, in the comment, TestFLO was recommended as an alternative solution.

These notes reflect our trial-and-error process. It brings back all the memories. Although the tool was easy to implement, it is still essential for users to be familiar with the system and possess the necessary skills. In that sense, I am still grateful to the team members who flexibly navigated that chaotic period with me.

Using Zephyr

It's been almost three years(!) since then. Even though it's an old story, here are my impressions of using Zephyr for JIRA. As it is a JIRA plugin, test cases can be created in the same way as normal issues by selecting the desired issue type. Case items include steps, expected values, results, status, comments, and file attachments, making it convenient to leave screen captures as evidence for each step. On the other hand, it took quite a long time to load the plugin itself; it took a few seconds each time we changed screens.
A similar question for help was posted on the Atlassian community, so it may be a Zephyr-specific issue.

And now to TestLink

Now, let's talk about test management after we somehow managed to meet the release schedule and handed off the project in May 2020. We reconsidered the cost aspect as well. Assuming the tool is linked to JIRA and the number of users is around 10 to 20 people, the prices as of 2020 were as follows:

- Zephyr for JIRA: 11-100 users, ¥510/user/month ⇒ ¥10,200/month for 20 users
- TestRail: 1-20 users, $32/user/month ⇒ $640 (approx. ¥83,200)/month for 20 users

The prices as of 2023 are as follows:

- Zephyr Squad: 11-100 users, ¥570/user/month ⇒ ¥11,400/month for 20 users
- TestRail: 1-20 users, $37/user/month ⇒ $740 (approx. ¥96,200)/month for 20 users

The fee structure has changed slightly since then, and the prices have gone up a bit.
*All prices are calculated at 130 yen to the dollar.

At first glance, Zephyr seems like a good deal, but since it is a plugin for JIRA, you actually need as many licenses as you have for JIRA. In that regard, since only QA members will use it rather than everyone in the Development Division, we want to avoid costs growing as the organization expands. Still, TestRail is quite expensive. Considering the cost, there is no better option than the free TestLink. Although the UI of TestLink is not the best (it's open source, so I won't complain), as a test management tool it can at least resolve the issues mentioned above, as follows.

Testing process challenges and their solutions

| Challenge in the test process | When the tool is implemented | Benefit |
| --- | --- | --- |
| 1. Test case structuring often becomes personalized by the test designers, and case classifications and formats vary. | By describing test suites, test cases, test steps, etc. in a fixed format with appropriate detail, a consistent granularity is achieved. | Easy handover and case deciphering! |
| 2. To review the cases, you need to open and check the contents of files each time. | High visibility of implementation items and easy tracing to test requirements make coverage easy to understand. | Documents can be easily shared within and outside the team! |
| 3. It is difficult for stakeholders to get an overview of the test contents and results. | Test progress and results can be tracked in real time and viewed in reports. | No need for QA to create reports! |
| 4. For regression testing, a new file needs to be created for each test cycle. | Cases can be reused on a test suite basis. | It's easy to identify reusable components! |
| 5. It is difficult to record the change history and update history of test cases. | In addition to adding and modifying test cases, the history can be recorded. | Case maintenance is easier! |
| 6. Since the test execution results are entered manually, the exact execution time is unknown. | Bug reports, execution times, and execution records are accurately logged. | You can narrow down the implementation time period! |
| 7. Test cases and bug reports are not linked. | Requirements and releases can be tracked more easily, e.g. test progress rate and defect occurrence rate. | It's easy to compile data such as the defect occurrence rate for each function! |

So, we decided to introduce TestLink from June 2020 onwards. Well, I'm sure my teammates will get annoyed if I say it's easy, but the truth is that while the tool isn't omnipotent, it's a lot easier than using data files like Excel.

Postscript

Even though it's free, there are still infrastructure costs to run it. We are using an AWS instance for TestLink, which costs several tens of thousands of yen per year. It has been almost three years since we started using it, and so far we have been able to operate it without any major issues.

In this article, I explained how we implemented TestLink as a test management tool in the QA group. In future posts, I hope to discuss how TestLink is used in actual projects, its integration with JIRA, and more.