Trip Report on Big Graph Processing Systems Dagstuhl Seminar

As always, it is a tremendous honor to be invited to a Dagstuhl seminar. Last week, I attended the seminar on “Big Graph Processing Systems.”

During the first day, every participant had five minutes to present where they were coming from and what their research interests are. There was an interesting mix of large-scale processing and graph database systems researchers, with a handful of theoreticians. My goal was to push for the need to get users involved in the data integration process, and I believe I accomplished it.

The organizers pre-selected three areas to ground the discussions: 

Abstraction: While imperative programming models, such as vertex-centric or edge-centric ones, are popular, they lack a high-level exposition for the end user. To increase the power of graph processing systems and foster the usage of graph analytics in applications, we need to design high-level graph processing abstractions. What future declarative graph processing abstractions could look like is currently completely open.

Ecosystems: In modern setups, graph-processing is not a self-sustained, independent activity, but rather part of a larger big-data processing ecosystem with many system alternatives and possible design decisions. We need a clear understanding of the impact and the trade-offs of the various decisions in order to effectively guide the developers of big graph processing applications.

Performance: Traditional measures of performance and scalability, e.g., FLOPS, throughput, or speedup, are difficult to apply to graph processing, especially since performance depends non-trivially on platform, algorithm, and dataset. Moreover, running graph-processing workloads in the cloud raises additional challenges. Identifying such performance-related issues is key to designing and building widely recognized benchmarks for graph processing.

I participated in the Abstractions group because it touches on topics closest to my interests, such as graph data models, schemas, etc. Thus, this report only covers the discussions I had in that group.

Setting the Stage

During a late-night wine conversation with Marcelo Arenas (wine and late-night conversations are a crucial aspect of Dagstuhl), we talked about the two kinds of truth:

“An ordinary truth is a statement whose opposite is a falsehood. A profound truth is a statement whose opposite is also a profound truth” – Niels Bohr

If we apply this to a vision, we can consider an ordinary vision and a profound vision. 

An example of an ordinary vision: we need to make faster graph processing systems. This is ordinary because the opposite is false: we would not want to design slower graph processing systems.

With this framework in mind, we should be thinking about profound visions. 

Graph Abstractions

There seems to be an understanding, and even agreement in the room, that graphs are a natural way of representing data. The question is WHY?

Let’s start with a few observations: 

Observation 1: there have been numerous types of data models and corresponding query languages. Hand-waving a bit, we can group these into tabular, graph, and tree, each with many different flavors.

Observation 2: What goes around comes around. We have seen many data models come and go several times in the past 50 years. See the Survey of Graph Database Models by Renzo Angles and Claudio Gutierrez, and even our manuscript on the History of Knowledge Graphs.

So, why do we keep inventing new data models? 

Two threads came out of our discussions:

1) Understand the relationship between data models

Over time, there have been manifold data models. Even though the relational model remains the strongest, graph data models are increasingly popular, specifically RDF Graphs and Property Graphs. And who knows, tomorrow we may have new data models that gain traction. With all of these data models, it is paramount to understand how they relate to each other.

We have seen approaches that study how pairs of these data models relate. During the 90s, there was a vast amount of work on connecting XML (a tree data model) with the relational data model. The work we did on mapping relational data to RDF graphs led to the foundation of the W3C RDB2RDF Direct Mapping standard. The work of Olaf Hartig on RDF* connects RDF Graphs with Property Graphs.

These approaches all have the same intent: understand the relationship between data model A and data model B. However, all of these independent approaches remain disconnected.

The question is: what is a principled approach to understand the relationship between different data models? 

Many questions come to mind: 

  • How do we create mappings between different data models? (A concrete sketch follows this list.)
  • Or should we create a dragon data model that rules them all, such that every data model can be mapped to it? If so, what are all the abstract features that a data model should support?
  • What is the formalism to represent mappings? Logic? Algebra? Category theory?
  • What properties should mappings have? Information, query, and semantics preservation; composability; etc.
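
To make the first question concrete, here is a minimal sketch, in Python, of a relational-to-RDF mapping in the spirit of the W3C RDB2RDF Direct Mapping. The base IRI, table, and rows are illustrative assumptions; the standard defines the exact IRI construction rules.

```python
# A minimal sketch of a relational-to-RDF direct mapping. Each row becomes
# a subject IRI built from the table name and its primary-key value, each
# column becomes a predicate, and each cell value becomes an object literal.
# The base IRI and the "people" table below are made up for illustration.

BASE = "http://example.com/base/"

def direct_mapping(table, pk, rows):
    """Map each relational row of `table` to a list of RDF triples."""
    triples = []
    for row in rows:
        subject = f"<{BASE}{table}/{pk}={row[pk]}>"
        # Type triple: the row is an instance of its table.
        triples.append((subject, "rdf:type", f"<{BASE}{table}>"))
        for column, value in row.items():
            predicate = f"<{BASE}{table}#{column}>"
            triples.append((subject, predicate, f'"{value}"'))
    return triples

people = [
    {"id": 1, "name": "Alice"},
    {"id": 2, "name": "Bob"},
]
for s, p, o in direct_mapping("people", "id", people):
    print(s, p, o, ".")
```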

2) Understand the relationships between data models and query languages with users

It is our understanding (“our feeling”) that a graph data model should be the ultimate data model for data integration. 

Why? 

Because graphs bridge the conceptualization gap between how end users think about data and how data is physically stored. Over and over again, we said that “graphs are a natural way of representing data”.

But, what does natural even mean? Natural to whom? For what? 

Our hypothesis is that this lack of understanding between data and users is the reason why we keep inventing new data models and query languages. We really need to understand the relationship between data models and query languages on the one hand and users on the other. We need to understand how users perceive the way data is modeled and represented. We need to work with scientists and experts from other communities to design methodologies, experiments, and user studies. We also need to work with users from different fields (data journalists, political scientists, life scientists, etc.) to understand their intents.

Bottom line: we need to realize that user studies are important, and we need to work with the right people.

This trip report barely scratches the surface. There were so many other discussions that I wish I had been part of. We are all working on a vision paper that will be published as a group. We expect to have a public draft by March 2020.

Overall, this was a fantastic week and the organizers did a phenomenal job.


Gra.fo, a visual, collaborative, real-time ontology and knowledge graph schema editor

A common frustration we’ve encountered is the lack of adequate tooling for ontology and knowledge graph schema design. Many tools exist: some are overly complex, some are very expensive, and none allow you to work visually, collaboratively, and in real time on a document with multiple concurrent users.

That is why two years ago we took half a dozen Capsenta engineers and set off to design and develop a solution. Our design principles were:

Real World: Focus on solving problems that we encounter in the real world with our customers and users.
Understand our audience: We work with data geeks but also with business and domain experts; our solution has to satisfy them both.
Minimal: Let’s not boil the ocean; be focused and practical.
Do not reinvent the wheel: A lot of great research and scientific results exist that should be leveraged.

With that in mind, we are very pleased to announce the official launch of Gra.fo, a visual, collaborative, and real-time ontology and knowledge graph schema editor. It is the only editor where you can:

– Visually design your schema by dragging and dropping Concepts, Attributes, Relationships and Specializations, what we lovingly refer to as C/A/R/S.
– Share your document with other users and grant view-only, commenting, or editing permissions.
– Collaborate in real-time with multiple users.
– Comment on individual C/A/R/S with other users.
– Search for C/A/R/S.
– Track document history, name a version and restore to any previous version.

E-commerce Knowledge Graph Schema in Gra.fo

Like other knowledge graph editing software, Gra.fo lets you import and export existing RDF/OWL ontologies. Property Graphs are also supported. For an overview of Gra.fo’s features, check out this short demo:

After two years of stealth mode, we officially launched Gra.fo last week at the 17th International Semantic Web Conference (ISWC2018) in Monterey. The reception was universally positive. It was also very cool to demo Gra.fo to Sir Tim Berners-Lee!

Demoing Gra.fo to Sir Tim Berners-Lee

Discussions with Sir Tim Berners-Lee on Gra.fo and Solid.

Gra.fo is currently the only visual, collaborative, and real-time ontology and knowledge graph schema editor in existence. But that’s not good enough for us. We want to be the best knowledge graph editor, period! Toward that end we are actively working on several new features with more to come:

– Git: Commit and push your document to Git.
– Import Mapping: View R2RML mappings within Gra.fo.
– API: Access and build knowledge graphs programmatically.

Please check it out at https://gra.fo/. We offer one month gratis at the Team level so feel free to try out all that Gra.fo offers. Serving the community is important to us so please let us know what you need!

This is just the beginning!

12th Alberto Mendelzon Workshop on Foundations of Data Management in Cali, Colombia

This week, the 12th Alberto Mendelzon Workshop on Foundations of Data Management (AMW2018) takes place in Cali, Colombia. I have been coming to AMW since Cartagena, Colombia in 2014, and it has always been a fabulous event: a strong scientific program, fun people, and great organization. I’m incredibly humbled, honored, and excited to be the General Chair of AMW2018. It is a tremendous distinction to be part of this distinguished event and to share it with my database friends and colleagues in Cali, Colombia, a place I call home.
AMW2018 has a fantastic program this week, and it is all thanks to the PC Chairs, Dan Olteanu (University of Oxford) and Barbara Poblete (University of Chile), the School Chairs, Jarek Szlichta (University of Ontario) and Domagoj Vrgoč (PUC Chile) and the authors of the 29 papers that will be presented.
The AMW School takes place during the first two days of the week with four tutorials:

– Denis Parra (PUC Chile): A Tutorial on Information Visualization and Visual Analytics

– Miriam Fernandez (Open University): Introduction to Mining Social Media Data

– Martin Ugarte (Free University of Brussels): Understanding the Bitcoin Protocol

– Fei Chiang (McMaster University): Introduction to Data Quality

The workshop takes place Wednesday through Friday. The 29 papers are organized into sessions on data analytics, reasoning, query answering, and incomplete and probabilistic databases. All the papers have now been published in CEUR Workshop Proceedings Vol 2100. The workshop also has four keynotes ranging from theory to practice:

– Miriam Fernandez (Open University) on AI for policing

– Hung Ngo (RelationalAI Inc) on worst-case optimal join algorithms

– Pablo Barcelo (University of Chile) on reverse engineering problems for database query languages

– Vanessa Murdock (Amazon) on large-scale analysis of user engagement

I am extremely grateful to the sponsors and the local organizing committee. We received generous support from the VLDB Endowment and Cafeto Software. Thanks to their support, we are able to fund 25 students! The local organizers, Andres Castillo (Universidad del Valle) and Maria Constanza Pabon (Pontificia Universidad Javeriana Cali), have been a tremendous support in making this event happen.
Last but not least, given that AMW2018 is taking place in Cali, Colombia, the organization has become a family affair. My parents, and especially my mother, have been right by my side to make sure AMW2018 is a success. This event is taking place thanks to my mother! Gracias mami!
Hopefully you are now tempted to come to the next AMW, which will be somewhere in Latin America!

A Summer of Computer Science, Research, Semantic Web, Databases, Graphs and Travel!

This has been a summer of Computer Science, Research, Semantic Web, Databases, Graphs and a lot of travel! In the past 4 months, I visited 10 countries and traveled over 72,000 miles, equivalent to going around the world 3 times. Whew! This is the summary of my summer travel.

Montevideo
I attended the 11th Alberto Mendelzon Workshop on Foundations of Data Management. AMW is a scientific event with heavy attendance from database theory researchers. The hallway discussions are very insightful. I was the organizer of the Summer School and presented a short paper titled “Ontology Based Data Access: Where do the Ontologies and Mappings come from?” I had a lot of enlightening conversations with Luna Dong from Amazon (working on creating the Product Knowledge Graph; the Semantic Web is involved), Julia Stoyanovich, who gave a really thought-provoking tutorial on Data Responsibility (we should all pay attention to this), and Leonid Libkin (nulls in databases are still an issue). I was thrilled to finally meet Dan Suciu, James Shanahan, and Jan Van den Bussche, among many other database gurus. It’s always a pleasure to hang out with the Chilean database “mafia”: Marcelo Arenas, Pablo Barcelo, Leo Bertossi, Claudio Gutierrez, Aidan Hogan et al. Congrats to the local team for organizing a wonderful event, especially Mariano Consens!
Buenos Aires
I flew into Montevideo and out of Buenos Aires, so I got to spend a day and a half in this great city. I truly enjoyed it. I will have to come back! A blog post about my 36-hour visit to Buenos Aires will come soon.
San Francisco
I attended Graph Day, where I gave two talks: “Do I need a Graph Database? If so, what kind?” and “Graph Query Language Task Force Update from LDBC”. My takeaways:
– AWS is figuring out what to do with Graphs

– Uber is creating a Knowledge Graph
– Stardog was the only RDF graph database company there. They are growing and very direct with their material: if you are doing data integration, you should be using RDF.
– Multi-model databases are growing: DataStax, ArangoDB, OrientDB, and Microsoft’s latest release of CosmosDB
– New Graph databases: JanusGraph, Dgraph, AgensGraph
– openCypher is really pushing hard to be THE property graph query language standard

Germany
I attended the Dagstuhl Seminar “Federated Semantic Data Management”, organized by Johann-Christoph Freytag, Olaf Hartig and Maria Esther Vidal. On my way to Dagstuhl, I had the opportunity to stop in Koblenz to hang out with Steffen Staab.
We had extensive discussions on the state of the art in Federated Query Processing from the traditional Relational Database and Semantic Web perspectives. The goal was to understand the limitations of current approaches in considering ontological knowledge during federated query processing. Federated Semantic Data Management (FSDM) can be summarized in one sentence: being able to do 1) reasoning/inferencing over 2) unbounded/unknown sources. A couple of interesting open challenges to highlight:
1) Unbounded sources: In traditional federated data management, the number of sources is fixed. In FSDM, the number of sources may not be known. Therefore the source selection problem is harder.
2) Correctness: A relaxed version of correctness may need to be considered, a tradeoff between soundness/completeness and precision/recall.
3) Access control: This is still an open challenge even in traditional federated data management.
Switzerland
This is my third home. I try to swing by Zurich once a year. I spent a weekend at Bodensee and visited Säntis for the first time. I had the opportunity to visit Philippe Cudré-Mauroux at the University of Fribourg. We are the ISWC 2017 In-Use PC Chairs, so we had a face-to-face PC meeting. I also gave my talk “Integrating Relational Databases with the Semantic Web: past, present and future” for the first time. This talk is a 1-hour version of my upcoming lecture at the Reasoning Web Summer School in London.
Lisbon
What’s the best way to get from Zurich to London? Stopping for an entire day in Lisbon, of course! Especially when you pay for the ticket with miles and $10 USD. This was my first time in Lisbon. I arrived early in the morning and spent 6 hours walking around this amazing city. I also had the chance to have lunch with Sofia Pinto overlooking Lisbon and discuss ontology engineering! One of the best day layovers I have ever had. I have to come back. Blog post on the visit soon.
London
I was invited to be a lecturer at the 13th Reasoning Web Summer School (RW 2017). I delivered a half-day lecture on Integrating Relational Databases with the Semantic Web. My lecture notes appear as a chapter in the book Reasoning Web: Semantic Interoperability on the Web. It was great hanging out with good friends Axel Polleres and Andrea Cali. I finally got to meet Giorgos Stamou for the first time. Great conversations with Domenico Lembo on Ontology Based Data Access and with Leo Bertossi on Inconsistent Databases and Data Quality. The highlight of this visit, and of my summer, was the conference dinner at the Royal Society, where I sat next to Keith Clark and enjoyed a marvelous dinner speech by Bob Kowalski. Blog post on this event soon.
Toronto
Client work took me all the way to Toronto. First time in Canada! So if it’s hot in Texas, might as well spend time in a cooler place. This is a great weekend getaway destination (in the summer): fantastic views, food, and beer. I also had the chance to meet up with Mariano Consens and get a tour of the University of Toronto.
Chile
The Graph Query Language task force of the Linked Data Benchmark Council (LDBC) organized a week-long face-to-face meeting in Santiago, Chile to work on the proposal for a closed graph query language where paths are first-class citizens. A full week of hard work (we also had fun). I took advantage of this visit to see my UT friends Lindsey Carte, Alvaro Quezada-Hofflinger and Marcelo Somos, professors at the Universidad de La Frontera in Temuco. I gave a talk in Spanish, “Integrating Data using the Semantic Web: The Constitute Use Case”. It is an enjoyable challenge to give talks to non-computer scientists.
Miami
Back in February, I found an Austin-Miami roundtrip ticket for $110. So why not! We discovered the Barrel of Monks brewery in Boca Raton. This is a must if you are in that area and you like Belgian beers!
Greece
I was invited to attend the STI Summit in Crete. My first time in Crete, and in Greece (I have never attended ESWC, which is usually in Crete). A very intense couple of days talking about the future of Semantic Web research. Afterwards, I visited Irini Fundulaki at FORTH and Giorgos Stamou at the National Technical University of Athens, where I gave my talk on Integrating Relational Databases with the Semantic Web. I was very impressed with all the work on mappings done in both of these groups. In both cases, the one-hour talk turned into hours and hours of fruitful discussion. On my flight to Athens, I met a fellow travel geek: 72hrJetsettergirl. The next day, we randomly bumped into each other at the Acropolis. The sweet coincidences of life!
Atlanta
I attended the ACM Richard Tapia Celebration of Diversity in Computing. I have been attending this conference for 10 years: since I was a senior in college, throughout my graduate studies, and now as a PhD. This year, I was the Workshop and Panel Chair. I had the chance to moderate a panel, “From Research to Startup”, with Rachel Miller from Asana (from theory/crypto research to startup), Kunle Olukotun (Stanford professor and founder of multiple startups) and Andy Konwinski (PhD from UC Berkeley and co-founder of Databricks). I was also on another entrepreneurship panel with Ayanna Howard (Professor at Georgia Tech and founder of Zyrobotics) and Jamika Burge. Both panels had a mix of undergrads, grad students, and even faculty interested in learning about entrepreneurship experiences. We definitely had an amazing group of panelists. Kemafor Anyanwu Ogan invited me to be on her panel on Data Management for IoT. One of the highlights of the conference is meeting former and new members of Hispanics in Computing, including Manuel Pérez Quiñones (congrats on the Richard A. Tapia Achievement Award for Scientific Scholarship, Civic Science and Diversifying Computing!) and Dan Garcia. We missed you, Jose Morales and Patti Ordonez!
Netherlands
I’m writing this post on my way back from Amsterdam. I had the opportunity to meet up with Peter Boncz and talk about Graph Query Language use cases. I also gave my talk “Integrating Relational Databases with the Semantic Web” at the VU Weekly Artificial Intelligence meeting. Great crowd and a lot of great questions. Nice seeing Frank van Harmelen and Javier Fernandez.

The summer is well over. Fall is already in full force in Europe. But it still feels like summer in Texas.

Is RDF a graph?

A graph consists of a set of vertices (nodes, points) and a set of edges (arcs, lines) between vertices. The common definition is G = (V, E), where V represents the set of vertices and E the set of edges, each connecting two vertices.

Commercially, there are two prominent graph data models: the Property Graph and the RDF Graph. A property graph is a graph where key-value pairs can be associated with vertices and edges. An RDF graph is represented as a set of triples (subject, predicate, object), where the subject and object are vertices and the predicate is an edge.
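
To make this concrete, here is a small sketch in Python (the triples are made up for illustration) showing how a set of RDF triples induces the V and E of the definition above:

```python
# Sketch: an RDF graph as a set of (subject, predicate, object) triples.
# Subjects and objects form the vertex set V; each triple contributes a
# directed edge from subject to object, labeled by the predicate.

triples = {
    (":alice", ":knows", ":bob"),
    (":bob",   ":knows", ":carol"),
    (":alice", ":name",  '"Alice"'),
}

# V: every subject and object is a vertex. (Whether literals like "Alice"
# count as vertices is a modeling choice; here they do.)
V = {s for (s, p, o) in triples} | {o for (s, p, o) in triples}

# E: each triple (s, p, o) is an edge from s to o labeled p.
E = {(s, o, p) for (s, p, o) in triples}

print("V =", V)
print("E =", E)
```

By this definition, an RDF graph is a directed, edge-labeled graph.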

However, it seems that Jim Webber, Neo4j’s Chief Scientist, does not acknowledge that RDF graphs are graphs:

My response and Jim’s follow-up response:

and my response:

It is still unclear to me why Jim Webber believes RDF graphs are not graphs.

Jim, I’m in London this week. I would love to meetup, have a pint and chat about graphs!

A Refreshing, No-Fluff, No-Buzzword Perspective on Artificial Intelligence

I encountered this refreshing and excellent summary of Artificial Intelligence by John Launchbury, the Director of DARPA’s Information Innovation Office (I2O). Thanks Frank van Harmelen for posting this!

No fluff. No buzzwords. It is a crisp and succinct explanation of the state of AI today and where it is going. Deep learning wasn’t even mentioned!

The quick summary is that AI up to now can be summarized in two waves:

First Wave: Handcrafted Knowledge, which is very good at reasoning but not very good at perceiving the outside world. It is not good at learning or abstracting.

Handcrafted Knowledge: Enables reasoning over narrowly defined problems. No learning capability and poor handling of uncertainty.

Second Wave: Statistical Learning, which is good at perceiving and learning but not so good at reasoning and abstracting.

Statistical Learning: Nuanced classification and prediction capabilities. No contextual capability and minimal reasoning ability.

The next wave, called Contextual Adaptation, is where systems can construct explanatory models that explain real-world phenomena.

My takeaway from this is that GOFAI (Good Old-Fashioned AI) is still active and relevant, and that by combining it with Machine Learning we will enter the next wave of AI, which can provide answers to the why (context).

The conclusion of this video is aligned with the takeaway message from Jim Hendler’s presentation at the 4th Heidelberg Laureate Forum (HLF): we need Humans and AI together.

Hope you enjoy watching these videos as much as I did.

A Data Weekend in Austin

On the weekend of January 14-15, I attended Data Day Texas, Graph Day Texas and Data Day Health in Austin and gave three talks.

Do I need a Graph Database: This talk came out of a Q&A during a happy hour after a talk I gave at a meetup in Seattle. We were discussing when to use a graph database, and what type of graph to use: RDF or Property Graph.

Graph Query Languages: This talk gave an update on the work we have been doing in the Graph Query Language (GQL) task force at the Linked Data Benchmark Council (LDBC). The purpose of the GQL task force is to study query languages specifically for the Property Graph data model, because there is a need for a standard syntax and semantics. One of the main points I argued in this talk is the need for a closed language: graphs in, graphs out. One can argue that a reason for the success of relational databases is that the query language is closed (tables in, tables out). With this principle, queries can be composed (i.e., views!). This talk was well received and generated a lot of interesting discussion, especially with Emil Eifrem, Neo Technology’s CEO, in the room. An interesting part of the discussion was whether we are too early for standardization. Emil stated that we need standardization now because their clients are asking for it. I stated that graph databases today are where relational databases were in the mid-1980s, so the time is about right to start the discussion. Andrew Donoho said I was too optimistic; he thinks we are in the late 70s and that we are too early. I will be giving this talk next week at the Smart Data conference, with some updated material. Special thanks to Marcelo Arenas, Renzo Angles, and especially Hannes Voigt for helping me organize these slides.
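
To illustrate why closure enables composition, here is a toy sketch in Python (the graph representation and the two queries are made up for illustration): when every query maps a graph to a graph, queries stack just like relational views.

```python
# Toy sketch of a closed graph query language: a graph is a set of
# (source, label, target) edges, and every query is graph in, graph out,
# so queries compose -- the graph analogue of views over views.

def filter_label(label):
    """Query: keep only the edges with the given label."""
    def q(g):
        return {(s, l, t) for (s, l, t) in g if l == label}
    return q

def reverse_edges(g):
    """Query: flip the direction of every edge."""
    return {(t, l, s) for (s, l, t) in g}

g = {("a", "knows", "b"), ("b", "knows", "c"), ("a", "likes", "c")}

# Because both queries are closed, the output of one is a valid input to
# the other -- a "view" defined over another view.
knows_view = filter_label("knows")(g)
print(reverse_edges(knows_view))
```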

Semantic Search Applied to Healthcare: In this talk, I introduced how we are identifying patients in need of Left Ventricular Assist Devices (LVADs) using Ultrawrap, the semantic data virtualization technology developed at Capsenta. The talk presented a use case with the Ohio State University Wexner Medical Center. Patients are being missed through traditional chart-pull methods. Our approach has resulted in a ~20% increase in detection over the previously known population at OSU, which is a mature institution. This talk will also be given at the Smart Data conference.

Main highlights of the conference:

  • Emil Eifrem, CEO of Neo Technology, gave the keynote. It was nice to learn about the use cases where Neo4j is being used: Real-time recommendation, Fraud detection, Network and IT operations, Master Data Management, Graph-Based Search and Identity & Access Management. It was not clear why graphs specifically were used, because these use cases have been around for a long time and have been addressed using traditional technologies. Emil ended by talking about a “connected enterprise”, meaning integrating data across silos using graphs. If you take a look at my Do I need a graph database talk, you will see that I argue for using RDF for data integration, not Property Graphs.
  • Luca Garulli, the founder and CEO of OrientDB, gave a talk focusing on the need for a multi-model database like OrientDB. In his talk, he argued for many features which Neo4j apparently didn’t support. Not long after, there was a good back-and-forth Twitter discussion between Emil and Luca, with Emil correcting Luca. It seems this talk may need to be updated. An interesting takeaway for me: how do you benchmark a multi-model database?
  • There were many talks about “I’m in relational, how do I get to property graphs”, all of them at an introductory level. Given how well we have studied the problem of relational to RDF, this should be a problem that can be addressed quickly and efficiently.
  • Standards were a big topic, one of the reasons my Graph Query Language talk was well received. Neo4j is pushing for openCypher to become the standard, while, in fact, one could argue that Gremlin is already the de facto standard. Before this weekend, I wasn’t aware of anybody implementing openCypher. Apparently there are now 10 openCypher implementations, including Bitnine, Oracle and SAP HANA.
  • Bitnine: they are implementing a Property Graph DB on top of Postgres and using openCypher as the query language. They are NOT translating openCypher to SQL; instead, they do the translation to relational algebra internally (a toy sketch of this kind of translation follows this list). I enjoyed the brief discussion with Kisung Kim, Bitnine’s CTO. Apparently they have already benchmarked with LDBC and did very well. Looking forward to seeing public results. Bitnine is open source.
  • Take a look at sql2gremlin.com
  • grakn.ai looks interesting. Need to take a closer look.
  • Cray extended the LUBM benchmark and added a social network for the students.
  • Property Graphs are what come to mind when people think about graph databases. However, an interesting observation is that the senior folks in the room prefer RDF over Property Graphs. We all agreed that RDF is more mature than Property Graph databases.
  • “Those who do not learn history are doomed to repeat it.” It is crucial to understand what has been done in the past in order to not reinvent the wheel. I feel lucky that early on in grad school my advisor pushed me to read pre-PDF papers. It was great to meet this weekend with folks like Darrel Woelk and Misty Nodine, who used to be part of MCC. A lot of the technologies we are seeing today have roots in MCC. For example, we discussed how similar graph databases are to object-oriented databases. On Twitter, Emil seemed to disagree with me; nevertheless, we had an interesting discussion.
  • Check out JanusGraph, a graph database which, if I understood correctly, is a fork of Titan. Titan hasn’t been updated in over a year because the folks behind it are now at DataStax.
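
As promised above, here is a toy sketch in Python of what translating a graph pattern into relational algebra could look like. The relation names, schema, and operators are made up for illustration; Bitnine’s actual internals are surely far more sophisticated.

```python
# Toy sketch: the pattern (a)-[:knows]->(b) becomes a join of selections
# over nodes(id, label) and edges(src, label, dst) relations -- the same
# idea whether the target is SQL text or an internal algebra.

from dataclasses import dataclass
from typing import Any

@dataclass
class Rel:          # a base relation
    name: str

@dataclass
class Select:       # sigma: filter rows by a predicate
    pred: str
    child: Any

@dataclass
class Join:         # theta join of two subplans
    cond: str
    left: Any
    right: Any

def edge_pattern(edge_label: str) -> Join:
    """Build an algebra plan for the pattern (a)-[:edge_label]->(b)."""
    edges = Select(f"edges.label = '{edge_label}'", Rel("edges"))
    with_src = Join("edges.src = a.id", Rel("nodes a"), edges)
    return Join("edges.dst = b.id", with_src, Rel("nodes b"))

print(edge_pattern("knows"))
```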

Thanks to Lynn Bender and co. for organizing such an awesome event! Can’t wait for it to happen in Austin next year. Recordings of the talks will start to show up on the Global Data Geek YouTube channel.

Starting a blog!

One of my 2017 resolutions is to start writing again. I’m hoping a blog will help me achieve this goal.

I want to share thoughts about my geeky interests: computer science, research, semantic web, databases, etc.; and my non-geeky interests such as travel, miles and points, cheap flights, beer and wine 🙂 .

This should be an interesting smorgasbord of content!