Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

Flocon de toile | Freelance Drupal: Change the position of the meta data panel on the node form with Drupal 8

Wed, 11/08/2017 - 10:00
Content metadata (menu settings, publishing options, URL path settings, and so on) is by default displayed on the node form in a side panel. This has the advantage of giving immediate visibility of these options while writing content. But there are use cases where the lateral position of this information is detrimental to the general ergonomics, because it reduces the space available for the content form. This can be the case, for example, if you use the Field Group module to structure and group the information you need to enter. No need here for a Drupal expert. Let's find out how we can make the position of this metadata customizable according to the needs and general ergonomics of a Drupal 8 project.

Agiledrop.com Blog: AGILEDROP: Top Drupal blogs from October

Wed, 11/08/2017 - 08:56
October is over, so it's time we present you the top Drupal blogs written in October by other authors.  Let's start with How to maintain Drush commands for Drush 8 and 9 and Drupal console with the same code base by Fabian Bircher from Nuvole. He shows us that the solution is actually really simple: it is all about separating the command discovery from the command logic. Check it out! Our second choice is Decoupled Drupal Hard Problems: Image Styles by Mateu Aguiló Bosch from Lullabot. He shows us the problems that arise when the back end doesn't know anything about the front-end design. He presents a… READ MORE

Savas Labs: The cost of investing in Drupal 7 - why it's time for Drupal 8

Wed, 11/08/2017 - 00:00

In the second of a two-part series, we investigate Drupal 8's present value and help highlight sometimes hidden costs of developing on an older platform. Continue reading…

Morpht: Announcing Enforce Profile Field for Drupal 8

Tue, 11/07/2017 - 23:42


The Enforce Profile Field is a new module which lets editors require the completion of one or more profile fields before users can access content on a site. It is now available for Drupal 8.

Sometimes you need to collect a variety of profile data for different users. The data may be needed for regulatory compliance or marketing reasons. In some cases you need a single field, and in others it may be several. You may also wish to collect the information when a user accesses certain parts of the site.

The Enforce Profile Field module comes to the rescue in cases such as these, forcing users to complete their profile before being able to move on to the page they want to see. This may sound harsh; however, collecting data as you need it is a more subtle approach than enforcing it all at registration time.

The implementation consists mainly of a new field type called "Enforce profile" and an implementation of hook_entity_view_alter().
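As a rough illustration only (this is not the module's actual code), an implementation of hook_entity_view_alter() along these lines could hide an enforced view mode until the user's profile is complete; the module, field, and function names below are hypothetical:

<?php

use Drupal\Core\Entity\Display\EntityViewDisplayInterface;
use Drupal\Core\Entity\EntityInterface;
use Drupal\user\Entity\User;

/**
 * Implements hook_entity_view_alter().
 *
 * Hypothetical sketch: hide the content of an enforced view mode when the
 * current user has not yet filled in the required profile fields.
 */
function mymodule_entity_view_alter(array &$build, EntityInterface $entity, EntityViewDisplayInterface $display) {
  // Only act on nodes carrying the enforce field, and only on the "full" view mode.
  if ($entity->getEntityTypeId() !== 'node'
    || !$entity->hasField('field_enforce_profile')
    || $display->getMode() !== 'full') {
    return;
  }

  $user = User::load(\Drupal::currentUser()->id());
  // Assume the enforce field stores machine names of user fields to check.
  foreach ($entity->get('field_enforce_profile') as $item) {
    if ($user->hasField($item->value) && $user->get($item->value)->isEmpty()) {
      // Replace the rendered node with a prompt pointing to the profile form.
      $build = [
        '#markup' => t('Please complete your profile before viewing this content.'),
        '#cache' => ['max-age' => 0],
      ];
      return;
    }
  }
}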

The module works as follows:
  1. A site builder defines a "form display" for the user type bundle and specifies the fields associated with it to collect data.
    1. The fields should not be required, as this allows the user to skip them on registration and profile editing.
    2. In addition, the Form Mode Manager module can be used to display the "form display" as a "tab" on a user profile page.
  2. The site builder places an Enforce profile field onto an entity type bundle, such as a node article or page.
  3. The Enforce profile field requires some settings:
    1. A "User's form mode" to be utilized for additional field information extraction (created in the first step).
    2. An "Enforced view modes" that require some profile data to be filled in before being able to access them. You should usually select the "Full content" view mode and rather not include view modes like "Teaser" or "Search".
  4. The editor creates content, an article or page, and can then select which fields need to be enforced.
    1. The editor is provided with a multi-select of the "User's form mode" fields.
    2. Selecting nothing means no access change and no profile data enforcement.
  5. A new user navigates to the content and is redirected to the profile tab and is informed that they need to complete the fields.
  6. Fields are completed, form submitted and the user redirected back to the content.
    1. If the user doesn't provide all enforced fields, the profile tab is displayed again with a message indicating which fields still need to be filled in.
Why use the Enforce Profile Field to collect additional profile data?
  • You may need a customer's information to generate a coupon or access token.
  • You may just want to know better with whom you share information.
  • Your users know exactly what content requires their additional profile data input rather than satisfying a wide range of requirements during registration. It just makes it easier for them.
  • The new profile data can be synced to a CRM or other system if required.

Let us know what you think.
 

Acro Media: Video: How Commerce 2.x Makes Taxes Simple

Tue, 11/07/2017 - 21:44

Tax regulations can be ridiculously complicated, particularly in the U.S., but Drupal has your back. With more inclusions and better integrations out of the box, Commerce 2.x represents a significant improvement from Commerce 1.x. Watch this High5 video for details!

Commerce 2.x now includes:
  • Native integration with Avalara
    That means full integration for every region that Avalara handles. Integrations with Tax Cloud and TaxJar are also in the pipeline, so U.S.-based businesses will have a few different options.
  • Built-in tax rules for Canada and the EU (and more)
    These are now included right out of the box; no add-ons or third-party service required. As long as you stay up to date with your Commerce install, you will automatically get any new rules or changes. And if you sell to other countries, you can still build the tax rules and configure them yourself.
  • The ability to prescribe when a tax applies
    Besides being able to set what products a tax applies to and in what regions, you can now select when it applies. So if a tax rule is set to come into effect on January 1st, for instance, you can set that up way in advance and not have to be up at dawn on the big day to push a button. This functionality is also key when it comes to redoing old orders that were done under a different tax scheme.
As always, if you have questions about getting your site set up on Drupal Commerce 2, let us know! We'd love to help.

Agaric Collective: Conditional fields in Paragraphs using the Javascript States API for Drupal 8

Tue, 11/07/2017 - 14:03

While creating content, there are pieces of information that are only relevant when other fields have a certain value. For example, if we want to allow the user to upload either an image or a video, but not both, we can add another field for the user to select which type of media they want to upload. In these scenarios, the JavaScript States API for Drupal 8 can be used to conditionally hide and show the input elements for the image and the video.

Note: Do not confuse the JavaScript States API with the storage State API.

The basics: conditional fields in node forms

Let's see how to accomplish the conditional fields behavior in a node form before explaining the implementation for paragraphs. For this example, let's assume a content type with a machine name of article and three fields: field_image, field_video, and field_image_or_video. The field_image_or_video field is of type List (text) with the following values: Image and Video.

/**
 * Implements hook_form_alter().
 */
function nicaragua_form_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  if ($form_id == 'node_article_form' || $form_id == 'node_article_edit_form') {
    $form['field_image']['#states'] = [
      'visible' => [
        ':input[name="field_image_or_video"]' => ['value' => 'Image'],
      ],
    ];
    $form['field_video']['#states'] = [
      'visible' => [
        ':input[name="field_image_or_video"]' => ['value' => 'Video'],
      ],
    ];
  }
}

Note that in Drupal 8 the node add and node edit forms have different form IDs. Hence, we check for either one before applying the field states. After checking for the right forms to alter, we implement the fields' states logic as follows:

$form[DEPENDEE_FIELD_NAME]['#states'] = [
  DEPENDEE_FIELD_STATE => [
    DEPENDENT_FIELD_SELECTOR => ['value' => DEPENDENT_FIELD_VALUE],
  ],
];

DEPENDENT_FIELD_SELECTOR is a CSS selector that targets the HTML form element as rendered in the browser, not a key in Drupal's nested form structure.

Conditional fields in Drupal 8 paragraphs

Although hook_form_alter could be used for paragraphs as well, their deeply nested structure makes that approach very complicated. Instead, we can use hook_field_widget_form_alter to alter the paragraph widget before it is added to the form. In fact, we are going to use the widget-specific hook_field_widget_WIDGET_TYPE_form_alter to affect paragraphs only.

For this example, let’s assume a content type has a machine name of campaign with an entity reference field whose machine name is field_sections. The paragraph where we want to apply the conditional logic has a machine name of embedded_image_or_video with the following fields: field_image, field_video, and field_image_or_video. The field_image_or_video field is of type List (text) with the following values: Image and Video.

/**
 * Implements hook_field_widget_WIDGET_TYPE_form_alter().
 */
function nichq_field_widget_paragraphs_form_alter(&$element, \Drupal\Core\Form\FormStateInterface $form_state, $context) {
  /** @var \Drupal\field\Entity\FieldConfig $field_definition */
  $field_definition = $context['items']->getFieldDefinition();
  $paragraph_entity_reference_field_name = $field_definition->getName();

  if ($paragraph_entity_reference_field_name == 'field_sections') {
    /** @see \Drupal\paragraphs\Plugin\Field\FieldWidget\ParagraphsWidget::formElement() */
    $widget_state = \Drupal\Core\Field\WidgetBase::getWidgetState($element['#field_parents'], $paragraph_entity_reference_field_name, $form_state);

    /** @var \Drupal\paragraphs\Entity\Paragraph $paragraph */
    $paragraph_instance = $widget_state['paragraphs'][$element['#delta']]['entity'];
    $paragraph_type = $paragraph_instance->bundle();

    // Determine which paragraph type is being embedded.
    if ($paragraph_type == 'embedded_image_or_video') {
      $dependee_field_name = 'field_image_or_video';
      $selector = sprintf('select[name="%s[%d][subform][%s]"]', $paragraph_entity_reference_field_name, $element['#delta'], $dependee_field_name);

      // Dependent fields.
      $element['subform']['field_image']['#states'] = [
        'visible' => [
          $selector => ['value' => 'Image'],
        ],
      ];
      $element['subform']['field_video']['#states'] = [
        'visible' => [
          $selector => ['value' => 'Video'],
        ],
      ];
    }
  }
}

Paragraphs can be referenced from multiple fields. If you want to limit the conditional behavior you can check the name of the field embedding the paragraph using:

$field_definition = $context['items']->getFieldDefinition();
$paragraph_entity_reference_field_name = $field_definition->getName();

If you need more information on the field or entity where the paragraph is being embedded, the field definition (instance of FieldConfig) provides some useful methods:

$field_definition->getName(); // Returns the field_name property. Example: 'field_sections'.
$field_definition->getType(); // Returns the field_type property. Example: 'entity_reference_revisions'.
$field_definition->getTargetEntityTypeId(); // Returns the entity_type property. Example: 'node'.
$field_definition->getTargetBundle(); // Returns the bundle property. Example: 'campaign'.

In Drupal 8 it is a common practice to use the paragraph module to replace the body field. When doing so, a single field allows many different paragraph types. In that scenario, it is possible that different paragraph types have fields with the same name. You can add a check to apply the conditional logic only when one specific paragraph type is being embedded.

$widget_state = \Drupal\Core\Field\WidgetBase::getWidgetState($element['#field_parents'], $paragraph_entity_reference_field_name, $form_state);
$paragraph_instance = $widget_state['paragraphs'][$element['#delta']]['entity'];
$paragraph_type = $paragraph_instance->bundle();

The last step is to add the Javascript states API logic. There are two important things to consider:

  • The paragraph widget's fields are added under a subform key.
  • Because multiple paragraphs can be referenced from the same field, we need to consider the order (i.e. the paragraph delta). This is reflected in the DEPENDENT_FIELD_SELECTOR.
$element['subform'][DEPENDEE_FIELD_NAME]['#states'] = [
  DEPENDEE_FIELD_STATE => [
    DEPENDENT_FIELD_SELECTOR => ['value' => DEPENDENT_FIELD_VALUE],
  ],
];

When adding the widget, the form API will generate markup similar to this:

<select data-drupal-selector="edit-field-sections-0-subform-field-image-or-video" id="edit-field-sections-0-subform-field-image-or-video--vtQ4eJfmH7k" name="field_sections[0][subform][field_image_or_video]" class="form-select required" required="required" aria-required="true">
  <option value="Image" selected="selected">Image</option>
  <option value="Video">Video</option>
</select>

So we need a selector like select[name="field_sections[0][subform][field_image_or_video]"] which can be generated using:

$selector = sprintf('select[name="%s[%d][subform][%s]"]', $paragraph_entity_reference_field_name, $element['#delta'], $dependee_field_name);

By using $element['#delta'] we ensure that the conditional field logic is applied to the proper instance of the paragraph. This works when a field allows multiple paragraphs, including multiple instances of the same paragraph type.

Warning: JavaScript behavior does not affect user input

It is very important to note that the form elements are hidden and shown via JavaScript. This does not affect user input. If, for example, a user selects image and uploads one, then changes the selection to video and sets one, both the image and the video will be stored. Switching the selection from image to video and vice versa does not remove what the user had previously uploaded or set. Once the node is saved, if there are values for both the image and the video, both will be saved. One way to work around this when rendering the node is to toggle field visibility in the node Twig template. In my session "Twig Recipes: Making Drupal 8 Render the Markup You Want" there is an example of how to do this. Check out the slide deck and the video recording for reference.
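If you prefer to keep the toggle out of the template, the same effect can be achieved from a preprocess hook instead. Here is a minimal sketch for the article example above; the module name is hypothetical and the field names are assumed to match the earlier example:

<?php

/**
 * Implements hook_preprocess_node().
 *
 * Hypothetical sketch: render only the field that matches the editor's
 * selection, even if values were saved for both image and video.
 */
function mymodule_preprocess_node(array &$variables) {
  $node = $variables['node'];
  if ($node->bundle() !== 'article' || !$node->hasField('field_image_or_video')) {
    return;
  }
  $choice = $node->get('field_image_or_video')->value;
  // Remove the render array element of the field that was not selected.
  if ($choice === 'Image') {
    unset($variables['content']['field_video']);
  }
  elseif ($choice === 'Video') {
    unset($variables['content']['field_image']);
  }
}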

What do you think of this approach to add conditional field logic to paragraphs? Let me know in the comments.

PreviousNext: Composing Docker Local Development: Networking

Mon, 11/06/2017 - 21:47

It's extremely important to have default values that you can rely on for local Drupal development; one of those is "localhost". In this blog post we will explore what is required to make our local development environment appear as "localhost".

by Nick Schuch / 7 November 2017

In our journey migrating to Docker for local dev, we found ourselves running into issues with "discovery" of services, e.g. Solr, MySQL, and Memcache.

In our first iteration we used linking, allowing our services to talk to each other. Some downsides to this were:

  • Tricky to compose an advanced relationship; let's use PHP and PhantomJS as an example:
    • PHP needs to know where PhantomJS is running
    • PhantomJS needs to know the domain of the site that you are running locally
    • Wouldn't it be great if we could just use "localhost" for both of these configurations?
  • DNS entries are only available within the containers themselves, so you cannot run utilities outside of the containers, e.g. a MySQL admin tool.

With this in mind, we hatched an idea...

What if we could just use "localhost" for all interactions between all the containers?

  • If we wanted to access our local project's Apache: http://localhost (inside and outside of the container)
  • If we wanted to access our local project's Mailhog: http://localhost:8025 (inside and outside of the container)
  • If we wanted to access our local project's Solr: http://localhost:8983 (inside and outside of the container)

All this can be achieved with Linux Network Namespaces in Docker Compose.

Network Namespaces

Linux Network Namespaces allow us to isolate processes into their own "network stacks".

By default, the following happens when a container gets created in Docker:

  • Its own Network Namespace is created
  • A new network interface is added
  • An IP is provided on the default bridge network

However, if a container is created and told to share the same Network Namespace with an existing container, they will both be able to interface with each other on "localhost" or "127.0.0.1".

Here are working examples for both OSX and Linux.

OSX

  • MySQL and Mail share the PHP container's Network Namespace, giving us "localhost" for "container to container" communication.
  • Port mapping provides host-to-container "localhost".
version: "3" services: php: image: previousnext/php:7.1-dev # You will notice that we are forwarding port which do not belong to PHP. # We have to declare them here because these "sidecar" services are sharing # THIS containers network stack. ports: - "80:80" - "3306:3306" - "8025:8025" volumes: - .:/data:cached db: image: mariadb network_mode: service:php mail: image: mailhog/mailhog network_mode: service:php

Linux

All containers share the Network Namespace of the user's host; nothing else is required.

version: "3" services: php: image: previousnext/php:7.1-dev # This makes the container run on the same network stack as your # workstation. Meaning that you can interact on "localhost". network_mode: host volumes: - .:/data db: image: mariadb network_mode: host mail: image: mailhog/mailhog network_mode: host Trade offs

To facilitate this approach we had to make some trade offs:

  • We only run one project at a time; only a single process can bind to port 80, 8983, etc.
  • We split the Docker Compose configuration into two separate files, making it simple for each OS to have its own approach.
Bash aliases

Since we split out our Docker Compose file to be "per OS" we wanted to make it simple for developers to use these files.

After a couple of internal developer meetings, we came up with some bash aliases that developers only have to set up once.

# If you are on a Mac.
alias dc='docker-compose -f docker-compose.osx.yml'

# If you are running Linux.
alias dc='docker-compose -f docker-compose.linux.yml'

A developer can then run all the usual Docker Compose commands with the shorthand dc command, e.g.

dc up -d

This also keeps the command docker-compose available if a developer is using an external project.

Simple configuration

The following solution has also provided us with a consistent configuration fallback for local development.

We leverage this in multiple places in our settings.php; here is one example (a further sketch follows the list below):

$databases['default']['default']['host'] = getenv("DB_HOST") ?: '127.0.0.1';
  • Dev / Stg / Prod environments set the DB_HOST environment variable
  • Local is always the fallback (127.0.0.1)
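The same environment-variable-with-localhost-fallback pattern works for any other service the containers expose; here is a hypothetical sketch (the MEMCACHE_HOST and LEGACY_DB_HOST variable names and the second database connection are assumptions, not part of the original setup):

// Memcache servers for the memcache module: env var first, localhost fallback.
$settings['memcache']['servers'] = [
  (getenv('MEMCACHE_HOST') ?: '127.0.0.1') . ':11211' => 'default',
];

// A secondary database connection (e.g. for migrations) with the same fallback.
$databases['migrate']['default'] = [
  'driver' => 'mysql',
  'database' => 'legacy',
  'username' => 'drupal',
  'password' => 'drupal',
  'prefix' => '',
  'host' => getenv('LEGACY_DB_HOST') ?: '127.0.0.1',
];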
Conclusion

While the solution required a deeper knowledge of the Linux kernel, it has yielded a much simpler setup for developers.

How have you managed Docker local dev networking? Let me know in the comments below.

Tagged Docker, Drupal Development


Hook 42: Hook 42 at New England Drupal Camp

Mon, 11/06/2017 - 20:21

We're super excited to attend New England Drupal Camp this year!

Aimee is honored to have been invited to be the keynote speaker this year. She'll be discussing inclusion and diversity in the community. In addition to Aimee's keynote, we are partnering up with our longtime friends at Lingotek to put together a hands-on multilingual workshop that covers Drupal 8 and an integration to Lingotek's Translation Management System.

Just in case that wasn't enough, we're also presenting a couple of sessions: one comparing the madness of the multilingual modules in Drupal 7 to the new and improved Drupal 8 multilingual approach, and another covering how ANYONE and EVERYONE can help contribute back to the Drupal project, even if they aren't the most advanced technical person.

Wim Leers: Rendering & caching: a journey through the layers

Mon, 11/06/2017 - 18:11

The Drupal render pipeline and its caching capabilities have been the subject of quite a few talks of mine and of multiple writings. But all of those were very technical, very precise.

Over the past year and a half I’d heard multiple times there was a need for a more pragmatic talk, where only high-level principles are explained, and it is demonstrated how to step through the various layers with a debugger. So I set out to do just that.

I figured it made sense to spend 10–15 minutes explaining (using a hand-drawn diagram that I spent a lot of time tweaking) and spend the rest of the time stepping through things live. Yes, this was frightening. Yes, there were last-minute problems (my IDE suddenly didn’t allow font size scaling …), but it seems overall people were very satisfied :)

Have you seen and heard of Render API (with its render caching, lazy builders and render pipeline), Cache API (and its cache tags & contexts), Dynamic Page Cache, Page Cache and BigPipe? Have you cursed them, wondered about them, been confused by them?

I will show you three typical use cases:

  1. An uncacheable block
  2. A personalized block
  3. A cacheable block that you can see if you have a certain permission and that should update whenever some entity is updated

… and for each, will take you on the journey through the various layers: from rendering to render caching, on to Dynamic Page Cache and eventually Page Cache … or BigPipe.

Coming out of this session, you should have a concrete understanding of how these various layers cooperate, how you as a Drupal developer can use them to your advantage, and how you can test that it’s behaving correctly.

I’m a maintainer of Dynamic Page Cache and BigPipe, and an effective co-maintainer of Render API, Cache API and Page Cache.

Preview:

Slides: Slides with transcript
Video: YouTube
Conference: Drupalcon Vienna
Location: Vienna, Austria
Date: Sep 28 2017 - 14:15
Duration: 60 minutes
Extra information:

See https://events.drupal.org/vienna2017/sessions/rendering-caching-journey-through-layers.

Attendees: 200

Evaluations: 4.6/5

Thanks for the explanation. Your sketches about the rendering process and how dynamic cache, page cache and big pipe work together are awesome. It is very clear now for me.


Best session for me on DC. Good examples, loved the live demo; these live demos are much more helpful to me as a developer than static slides. General comments, not related to the speaker: the venue was too small for this talk and it should have been on a larger stage. Also, the location next to the exhibition stands made it a bit noisy when sitting in the back.


Great presentation! I really liked the hand-drawn figure and live demo, they made it really easy to understand and follow. The speaking was calm but engaging. It was great that you were so flexible on the audience feedback.

ThinkShout: My First BADCamp

Mon, 11/06/2017 - 12:30

We’re fresh off of BADCamp (Bay Area Drupal Camp), and we’re eager to share our experience with you! If you’ve ever thought about going to one of the local Drupal Camps in your area, or attending BADCamp yourself, we hope our takeaways persuade you to seek this out as a professional development opportunity.

BADCamp is essentially three days of intense workshops and sessions for Drupal users to hone their skills, meet other open source contributors, and make valuable connections in the community. Amongst the ThinkShout team, two had never attended BADCamp before. We were eager to hear their perspective on the conference and their key takeaways.

Sessions they attended ranged from learning about component-based theming tools, object-oriented PHP, module development, and debugging JavaScript, to Drupal 9 and backward compatibility and the importance of upgrading to D8 now.

Let’s hear from Mario and Lui–I mean Amy and Jules, on what their first BADCamp experience was like!

Amy and Jules on Halloween. Costumes are not required at BADCamp.

What did you learn at BADCamp?

Amy: Component-based theming is a hot topic these days for those building sites, for a number of reasons. Here are a couple of them:

  • It encourages a DRY (Don’t Repeat Yourself) and more organized theming code base.
  • It decouples site building in such a way that backend and frontend developers can work on the site at the same time, rather than the backend code needing to be built first before the frontend developer can do their work.
  • It provides clients with an interactive experience of their site (including responsiveness) before the database and backend elements are hooked up to it. This allows the client more time to provide feedback in case they want to change behaviors before they’re completely built.

I also attended a session called React, GraphQL, and Drupal. This talk was largely about an opportunity to create multiple sites using the same API. The team used "headless Drupal" (to serve as the API), React.js to build the sites, and GraphQL to explore data coming from the API in a much more direct and clear way. It seemed like a great solution for a tricky problem, in addition to giving this team the opportunity to learn and use cutting-edge technologies - so much fun!

Jules: I learned a lot about the Drupal Community. This was my first BADCamp, and also my first Drupal conference. I was excited about how generous the community is with knowledge and tools, working together so we can succeed together.

I learned about some of the changes to Drupal releases from @Webchick's talk (Drupal 9 and Backward Compatibility: Why Now is the Time to Upgrade to Drupal 8). If I keep up with the incremental point releases (i.e. 8.x), upgrading to 9 should be pretty painless, which is a relief. Knowing the incremental releases will be coming out on a regular six-month-ish cadence will make planning easier. I'm also excited about the new features in the works, including Layouts, Workspaces, a better out-of-the-box experience on first install, and a better admin UI experience (possibly with React?).

What would you tell someone who is planning to attend BADCamp next year?

Amy: Definitely invest in attending the full-day sessions if they interest you. The information I took away from my Pattern Lab day was priceless, and I came back to ThinkShout excited and empowered to figure out a way to make component based theming part of our usual practice.

Jules: The full day sessions were a great way to dive into deeper concepts. It’s hard to fully cover a subject in a shorter session. It also helps to show up with an open mind. It’s impossible to know everything about Drupal, and there are so many tools available. It was valuable just meeting people and talking to them about their workflows, challenges, and favorite new tools.

Do you consider BADCamp to be better for networking, professional development, or both?

Amy: My big focus was on professional development. There were so many good training days and sessions happening that those filled my schedule almost entirely. Of course, attending sessions (and being a session speaker!) is a great way to network with like-minded people too.

Jules: My goal was to immerse myself in the Drupal community. Since I’m new to Drupal, the sessions were really valuable for me. Returning with more experience, that might not be the case. It was valuable to see new ideas being presented, challenged, discussed, and explored with mutual respect and support. We’re all in this together. Some talks were stronger than others, but every speaker had a nugget of gold I could take with me. It was encouraging to meet peers and to see all of the great work people are doing out in the world. It also served as a reminder that great strides can come from many small steps (or pushes)!

Make time to learn

It can be difficult to take time away from project work and dedicate yourself to two or three days of conferencing. But when you disconnect and dive into several days of learning, it makes your contributions back at the office invaluable. As Jules commented to me after her first day of sessions, "it was like php church!"

Getting out of your usual environment and talking to other people opens your mind up to other ways of problem solving, and helps you arrive at solutions you otherwise wouldn’t get through sitting in your cubicle. We hope you’re inspired to go to a local Drupal Meetup or Camp – or even better, meet us at DrupalCon or NTC’s Drupal Day!

Agiledrop.com Blog: AGILEDROP: Why rejecting projects due to resourcing challenges is avoidable

Mon, 11/06/2017 - 10:49
Even though I have been with AGILEDROP for a little over three months now, I have already found myself in a situation where two of our potential clients were on the verge of turning down their clients' projects. The reasons for that were different; I'll go into more detail later. The agencies we approached differed in size, one being bigger (more than 50 people), the other smaller (less than 10 people). And the challenges they faced were also different. As you will see, we could help both of them, but in the end, only one of the agencies trusted that we were capable of delivering.  From a simple… READ MORE

OSTraining: How to Highlight the Differences Between Two Images with Zurb TwentyTwenty Module

Mon, 11/06/2017 - 08:41

The Zurb TwentyTwenty module is mostly intended to highlight the difference between two images on a Drupal site. You have certainly seen those advertising images for skin products, for example. 

They would present half of the face before applying the product and half of the face after applying it. Besides such comparisons, you can use this module for other purposes as well. In this tutorial, you will learn how Zurb TwentyTwenty module works.

Appnovation Technologies: My First Book - Drupal 8 Module Development (Or Where I Have Been Lately)

Mon, 11/06/2017 - 08:00
My First Book - Drupal 8 Module Development (Or Where I Have Been Lately) If you’ve been wondering where I’ve been and why I haven’t been writing any articles lately, I am here to put your mind at ease: I've been working heavily on my first book about Drupal, called Drupal 8 Module Development. And I am happy to announce that it has finally been published and is available for purch...

fluffy.pro. Drupal Developer's blog: Install the latest version of a composer programmatically

Sun, 11/05/2017 - 14:53
If you need to install the latest version of Composer, you can use the following Bash snippet:
EXPECTED_SIGNATURE=$(wget -q -O - https://composer.github.io/installer.sig) && \
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
php -r "if (hash_file('SHA384', 'composer-setup.php') === '${EXPECTED_SIGNATURE}') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" && \
php composer-setup.php && \
php -r "unlink('composer-setup.php');" && \
mv composer.phar /usr/local/bin/composer
Because the EXPECTED_SIGNATURE variable holds the latest published signature value, you don't have to hardcode a specific one for the comparison on the third line.

Agaric Collective: Using CKEditor plugins in Drupal 8

Fri, 11/03/2017 - 17:21

CKEditor is well-known software with a big community behind it and it already has a ton of useful plugins ready to be used. It is the WYSIWYG text editor which ships with Drupal 8 core.

Unfortunately, the many plugins provided by the CKEditor community can't be used directly in the CKEditor that comes with Drupal 8. It is necessary to let Drupal know that we are going to add a new button to the CKEditor.

Why Drupal needs to know about our plugins

Drupal allows us to create different text formats, where depending on the role of the user (and so what text formats they have available) they can use different HTML tags in the content. Also, we can decide if the text format will use the CKEditor at all and, if it does, which buttons will be available for that text format.

That is why Drupal needs to know about any new button, so it can build the correct configuration per text format.

Adding a new button to CKEditor

We are going to add the Media Embed plugin, which adds a button to our editor that opens a dialog where you can paste an embed code from YouTube, Vimeo, and other providers of online video hosting.

First of all, let's create a new module which will contain the code of this new button, so inside the /modules/contrib/ folder let's create a folder called wysiwyg_mediaembed. (If you're not intending to share your module, you should put it in /modules/custom/— but please share your modules, especially ones making CKEditor plugins available to Drupal!)

cd modules/contrib/
mkdir wysiwyg_mediaembed

And inside let's create the info file: wysiwyg_mediaembed.info.yml

name: CKEditor Media Embed Button (wysiwyg_mediaembed)
type: module
description: "Adds the Media Embed Button plugin to CKEditor."
package: CKEditor
core: '8.x'
dependencies:
  - ckeditor

Adding this file allows Drupal to install the module. If you want to read more about how to create a custom module, you can read about it here.

Once we have our info file, we just need to create a Drupal plugin which tells CKEditor about this new button. We do that by creating the following class:

touch src/Plugin/CKEditorPlugin/MediaEmbedButton.php

With this content:

<?php

namespace Drupal\wysiwyg_mediaembed\Plugin\CKEditorPlugin;

use Drupal\ckeditor\CKEditorPluginBase;
use Drupal\editor\Entity\Editor;

/**
 * Defines the "wysiwyg_mediaembed" plugin.
 *
 * @CKEditorPlugin(
 *   id = "mediaembed",
 *   label = @Translation("CKEditor Media Embed Button")
 * )
 */
class MediaEmbedButton extends CKEditorPluginBase {

  /**
   * Gets the path to the library folder.
   *
   * The path where the library lives; usually all libraries are placed
   * inside the '/libraries/' folder in the Drupal root.
   */
  public function getLibraryPath() {
    $path = '/libraries/mediaembed';
    return $path;
  }

  /**
   * {@inheritdoc}
   *
   * Other plugins that our plugin requires; in our case, none.
   */
  public function getDependencies(Editor $editor) {
    return [];
  }

  /**
   * {@inheritdoc}
   *
   * The path where CKEditor will look for our plugin.
   */
  public function getFile() {
    return $this->getLibraryPath() . '/plugin.js';
  }

  /**
   * {@inheritdoc}
   *
   * We can provide extra configuration if our plugin requires it;
   * in our case we don't need it.
   */
  public function getConfig(Editor $editor) {
    return [];
  }

  /**
   * {@inheritdoc}
   *
   * Where Drupal will look for the image of the button.
   */
  public function getButtons() {
    $path = $this->getLibraryPath();
    return [
      'MediaEmbed' => [
        'label' => $this->t('Media Embed'),
        'image' => $path . '/icons/mediaembed.png',
      ],
    ];
  }

}

The class's code is pretty straightforward: it is just a matter of letting Drupal know where the library is and where the button image is and that's it.

The rest is just a matter of downloading the library, putting it in the correct place, and activating the module. If all went well, we will see our new button on the Drupal text formats page (usually at /admin/config/content/formats).

This module was ported because we needed it in a project, so if you want to know how this code looks all together, you can download the module from here.

Now that you know how to port a CKEditor plugin to Drupal 8, next time you can save time by using Drupal Console with the following command:

drupal generate:plugin:ckeditorbutton

What CKEditor plugin are you going to port?

Lullabot: Decoupled Drupal Hard Problems: Schemas

Fri, 11/03/2017 - 15:59

The Schemata module is our best approach so far in order to provide schemas for our API resources. Unfortunately, this solution is often not good enough. That is because the serialization component in Drupal is so flexible that we can’t anticipate the final form our API responses will take, meaning the schema that our consumers depend on might be inaccurate. How can we improve this situation?

This article is part of the Decoupled hard problems series. In past articles we talked about request aggregation solutions for performance reasons, and how to leverage image styles in decoupled architectures.

TL;DR
  • Schemas are key for an API's self-generated documentation.
  • Schemas are key for the maintainability of the consumer’s data model.
  • Schemas are generated from Typed Data definitions using the Schemata module. They are expressed in the JSON Schema format.
  • Schemas are statically generated but normalizers are determined at runtime.
Why Do We Need Schemas?

A database schema is a description of the data a particular table can hold. Similarly an API resource schema is a description of the data a particular resource can hold. In other words, a schema describes the shape of a resource and the datatype of each particular property.

Consumers of data need schemas in order to set their expectations. For instance, the schema tells the consumer that the body property is a JSON object that contains a value that is a string. A schema also tells us that the mail property in the user resource is a string in the e-mail format. This knowledge empowers consumers to add client-side form validation for the mail property. In general, a schema will help consumers to have prior understanding of the data they will be fetching from the API, and what data objects they can write to the API.

We are using the resource schemas in Docson and Open API to generate automatic documentation. When we enable JSON API and Open API, we get a fully functional and accurately documented HTTP API for our data model. Whenever we make changes to a content type, that will be reflected in the HTTP API and the documentation automatically. All thanks to the schemas.

A consumer could fetch the schemas for all the resources it needs at compile time, or fetch them once and cache them for a long time. With that information, the consumer can generate its models automatically without developer intervention. That means that with a single implementation, all of our consumers' models are done forever. There is probably a library for our consumer's framework that does this already.

More interestingly, since our schemas come with type information, they can be type safe. That is important to many languages like Swift, Java, TypeScript, Flow, Elm, etc. Moreover, if the model in the consumer is auto-generated from the schema (one model per resource), then minor updates to the resource are automatically reflected in the model. We can start to use the new model properties in Angular, iOS, Android, etc.

In summary, having schemas for our resources is a huge improvement for the developer experience. This is because they provide auto-generated documentation of the API, and auto-generated models for the consumer application.

How Are We Generating Schemas in Drupal?

One of Drupal 8's API improvements was the introduction of the Typed Data API. We use this API to declare the data types for a particular content structure. For instance, there is a data type for a Timestamp that extends an Integer. The Entity and Field APIs combine these into more complex structures, like a Node.

JSON API and REST in core can expose entity types as resources out of the box. When these modules expose an entity type they do it based on typed data and field API. Since the process to expose entities is known, we can anticipate schemas for those resources.

In fact, assuming resources are a serialization of the Field API and typed data is the only thing we can do. The base for JSON API and REST in core is Symfony's serialization component. This component is broken into normalizers, as explained in my previous series. These normalizers transform Drupal's inner data structures into other, simpler structures. After this transformation, all knowledge of the data type or structure is lost. This happens because the normalizer classes do not return the new types and new shapes the typed data has been transformed into. This loss of information is where the big problem lies with the current state of schemas.

The Schemata module provides schemas for JSON API and core REST. It does it by serializing the entity and typed data. It is only able to do this because it knows about the implementation details of these two modules. It knows that the nid property is an integer and it has to be nested under data.attributes in JSON API, but not for core REST. If we were to support another format in Schemata we would need to add an ad-hoc implementation for it.

The big problem is that schemas are static information. That means that they can't change during the execution of the program. However, the serialization process (which transforms the Drupal entities into JSON objects) is a runtime operation. It is possible to write a normalizer that turns the number four into 4 or "four" depending on whether the minute of execution is even or odd. Even though this example is bizarre, it shows that determining the schema upfront without other considerations can lead to errors. Unfortunately, we can't assume anything about the data after it's serialized.
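To make that runtime-versus-static tension concrete, here is a contrived sketch of such a normalizer, using the Symfony serializer interface that Drupal core relies on (the class and its behaviour are invented purely for illustration):

<?php

namespace Drupal\example\Normalizer;

use Symfony\Component\Serializer\Normalizer\NormalizerInterface;

/**
 * Contrived example: the output type depends on the minute of execution,
 * so no statically generated schema could describe it accurately.
 */
class EvenMinuteNumberNormalizer implements NormalizerInterface {

  public function normalize($object, $format = NULL, array $context = []) {
    // Return an integer on even minutes and a spelled-out string otherwise.
    return ((int) date('i')) % 2 === 0 ? 4 : 'four';
  }

  public function supportsNormalization($data, $format = NULL) {
    return $data === 4;
  }

}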

We can either make normalization less flexible—forcing data types to stay true to the pre-generated schemas—or we can allow the schemas to change during runtime. The second option clearly defeats the purpose of setting expectations, because it would allow a resource to potentially differ from the original data type specified by the schema.

The GraphQL community is opinionated on this and drives the web service from their schema. Thus, they ensure that the web service and schema are always in sync.

How Do We Go Forward From Here?

Happily, we are already trying to come up with a better way to normalize our data and infer the schema transformations along the way. Nevertheless, whenever a normalizer is injected by a third-party contrib module, or normalizations are improved with backwards compatibility, the Schemata module cannot anticipate it. Schemata will potentially provide the wrong schema in those scenarios. If we are to base the consumer models on our schemas, then they need to be reliable. At the moment they are reliable in JSON API, but only at the cost of losing flexibility with third-party normalizers.

One of the attempts to support data transformations, and the impact they have on the schemas, is Field Enhancers in JSON API Extras. They represent simple transformations via plugins. Each plugin defines how the data is transformed and how the schema is affected. This happens in both directions: when the data goes out, and when the consumers write back to the API and the transformation needs to be reversed. Whenever we need a custom transformation for a field, we can write a field enhancer instead of a normalizer. That way schemas will remain correct even if the data change implies a change in the schema.


We are very close to being able to validate responses in JSON API against schemas when Schemata is present. It will only happen in development environments (where PHP’s asserts are enabled). Site owners will be able to validate that schemas are correct for their site, with all their custom normalizers. That way, when a site owner builds an API or makes changes they'll be able to validate the normalized resource against the purported schema. If there is any misalignment, a log message will be recorded.

Ideally, we want the certainty that schemas are correct all the time. While the community agrees on the best solution, we have these intermediate measures to have reasonable certainty that your schemas are in sync with your responses.

Join the discussion in the #contenta Slack channel or come to the next API-First Meeting and show your interest there!

Hero photo by Oliver Thomas Klein on Unsplash.

InternetDevels: Responsive images in Drupal 8: beautiful on every device!

Fri, 11/03/2017 - 12:48

When does “smaller” mean “bigger”? When your images grow smaller to perfectly adjust themselves to various devices, while your user satisfaction, audience coverage, website’s speed, and profits grow bigger. A nice formula, isn’t it? This magic ability of images to adjust themselves to screens is how responsive web design works. And it works especially well in the latest Drupal version, Drupal 8, which has built-in support for responsive images.

Read more

Agiledrop.com Blog: AGILEDROP: Why should agencies focus on building ambitious websites

Fri, 11/03/2017 - 11:35
Dries Buytaert, the founder of Drupal, gave a great session this year at DrupalCon Vienna. Watch the part where he talks about who Drupal is for. Instead of focusing on big and small websites, or SME and enterprise clients, Dries describes the type of website Drupal is made for as ambitious.  What is not an ambitious website? A business that used to have a simple brochure website is now better off being served by SaaS (software as a service) solutions like Wix and Squarespace. Facebook, Google, and Amazon are providing services that not only cover what a good-old-website did in the past, but… READ MORE

Appnovation Technologies: Appnovator Spotlight: Meet Victoria Marcos

Fri, 11/03/2017 - 07:00
Appnovator Spotlight: Meet Victoria Marcos Who are you? What's your story? My name is Victoria Marcos, I’m from Venezuela and moved to England 8 years ago. I’m married and have a beautiful dog called Bonnie. I’ve been working in Appnovation for 3.5 years as Project Manager. I have a degree in Computer Engineering and a Master in Computer Science. I used to work as Business Analy...

OSTraining: What Does Delta Mean in Drupal?

Fri, 11/03/2017 - 04:44

When you are working with Views, you may have seen an extra option called "Delta".

Several students have asked us about the purpose of this field, because it wasn't clear.

The Delta option is available throughout the site, but ordinary users are most likely to encounter it inside Views. Here's how the "Delta" options appear in Views:
